Thanks, @Robert Krauss. Sounds like a plan. That's what I'm doing—without the automation: I manually download the files with Azure Storage Explorer and then group them in sets of 10 days. I have to do that because they are so big that they exceed the maximum size of an Excel spreadsheet. Then I load the merged files, again by hand.
I'll start with configuring the local Datalink agent to send the files, and then I'll worry about the BLOB file access.
Sounds like a job for my alter ego.
Update 05Nov2021:
1) Still working on getting the files via curl or Python.
2) I wrote a Python program that gets the files from the download area and compacts them into a single output file.
3) Created a Datalink File System connector that picks up the files, uploads them to CT, and then archives them.
Not 100% automated, but it's A LOT less labor-intensive.
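For anyone wanting to do something similar, the compaction step in (2) can be sketched roughly like this. This is a minimal stdlib-only sketch, assuming the downloaded extracts are same-layout CSV files sharing one header row; the directory, pattern, and function name are hypothetical placeholders, not the actual program:

```python
from pathlib import Path

def compact_files(download_dir: str, output_file: str, pattern: str = "*.csv") -> int:
    """Concatenate all downloaded extract files into one output file.

    Keeps the header row from the first file only, assuming every
    file shares the same header (hypothetical layout; adjust to
    match the real extracts). Returns the number of data rows written.
    """
    files = sorted(Path(download_dir).glob(pattern))
    rows_written = 0
    with open(output_file, "w", encoding="utf-8") as out:
        for i, src_path in enumerate(files):
            with open(src_path, encoding="utf-8") as src:
                header = src.readline()
                if i == 0:
                    out.write(header)  # header once, from the first file
                for line in src:
                    out.write(line)
                    rows_written += 1
    return rows_written
```

The merged file can then be dropped into the folder the Datalink File System connector watches, so step (3) picks it up without any manual load.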
#CostingStandard(CT-Foundation)