Hi William,
Thank you for raising this on the community forum. I can see why this would be a problem: if an ingested email contains many URLs, you are very likely to run into this rate limit.
When you submit all of those URLs at once, the Action Module processes them so quickly that you get rate limited. You should be able to get the behavior you want by building a Workflow that combines the URLScan.io integration with one of the fn_utilities components:
Utilities: Timer
This is an excerpt from the source code detailing what it does:
This function implements a simple timer. A workflow using this function will sleep for the
specified amount of time. The function takes as input utilities_time or utilities_epoch as input.
The function periodically checks the status of the calling workflow and will end
function execution if the workflow has been terminated.
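As a rough illustration of the behavior the excerpt describes, a timer like that can be sketched in plain Python. This is not the fn_utilities source; `is_terminated` stands in for the workflow status check and is a hypothetical callback:

```python
import time

def timer_sleep(seconds, is_terminated, check_interval=1.0):
    """Sleep for `seconds`, waking every `check_interval` seconds to ask
    whether the calling workflow has been terminated. `is_terminated` is
    a hypothetical status callback. Returns True if the full duration
    elapsed, False if the workflow ended early."""
    deadline = time.monotonic() + seconds
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return True   # slept the full duration
        if is_terminated():
            return False  # stop early: workflow was terminated
        time.sleep(min(check_interval, remaining))
```

The periodic wake-up is the important part: a single long `sleep` would keep the function alive after the workflow is cancelled, while short slices let it notice the termination promptly.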
I wanted to try this out before posting; I got it working, and here is a screenshot showing the Workflow.
In the example above, I am using the Utilities Timer to delay each submission by 5 seconds, followed by invoking the default UrlScanIO workflow unmodified. This is a great example of how you can nest one workflow inside another to resolve an issue like this. If you try this out, keep an eye on your resilient_circuits log: you should see the sleep timer run and then the URL submission right after.
Let us know if this works for you or if you have any questions.
------------------------------
Ryan Gordon
Security Software Engineer
IBM
------------------------------
Original Message:
Sent: Fri September 13, 2019 04:39 PM
From: William Chen
Subject: URLScan.io rate limiting
Hi,
I have the URLScan.io integration installed, everything is configured correctly, and URLs in the emails we receive are automatically sent to URLScan.io to be scanned.
However, I learned that URLScan.io enforces rate limiting so their resources aren't overwhelmed by misconfigured scripts: they accept only one new scan every 2 seconds. The way I have it configured, Resilient pulls all the URLs out of the email at once and pushes them out to be scanned (details below). When this happens, the first one completes fine and the rest error out. Has anyone else come across this issue? Any tips on getting around it?
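To illustrate the limit: any client submitting a batch of URLs needs to leave at least 2 seconds between requests, roughly like this. A minimal sketch in plain Python, where `submit` is a hypothetical callable that posts one URL to URLScan.io:

```python
import time

def submit_all(urls, submit, min_interval=2.0):
    """Send each URL via `submit` (a hypothetical callable that posts one
    URL to URLScan.io), pausing so consecutive submissions are at least
    `min_interval` seconds apart."""
    last = None
    for url in urls:
        if last is not None:
            wait = min_interval - (time.monotonic() - last)
            if wait > 0:
                time.sleep(wait)  # stay under the 1-scan-per-2-seconds limit
        submit(url)
        last = time.monotonic()
```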
We still use IRHub instead of the new email function, and IRHub is set to add the email body as Incident.Description. I use in-product scripting with a regex to find all URLs in the description and add them as artifacts; a separate rule then calls the urlscan function on the condition that an artifact is created.
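A minimal sketch of that extraction step (illustrative only; the actual in-product script and its regex may differ):

```python
import re

# Simple URL pattern (illustrative only); matches http/https up to the
# next whitespace or quote character.
URL_RE = re.compile(r"https?://[^\s<>\"']+")

def extract_urls(description):
    """Return the unique URLs found in an incident description,
    preserving first-seen order, ready to be added as artifacts."""
    seen = []
    for url in URL_RE.findall(description or ""):
        if url not in seen:
            seen.append(url)
    return seen
```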
------------------------------
Wub a lub a dub dub!
------------------------------