IBM RPA allows you to automate database operations and interact with databases. In this part 2, we will focus on more advanced commands and approaches, and dig deeper into some of the topics from part 1.
We will demonstrate how to export, import and transfer data. These approaches are useful for generating reports, integrating information from a supplier (or another piece of software), or even integrating applications with each other. If you combine them with our web automation, the possibilities are endless.
To achieve these goals, we will take a deeper look at other commands and approaches: importing JSON, CSV and XLS files with custom mappings, exporting and transferring data, bulk operations on datasets and dynamic table creation.
Just as a reminder, IBM RPA supports the most popular relational databases out of the box: Microsoft SQL Server, Oracle, PostgreSQL, MySQL and SQLite. You can also interact with other databases through ODBC, as long as the respective driver is installed.
Let’s start by expanding on Data Tables, bulk commands and importing files as Data Tables. These building blocks are essential for the scripts proposed below.
Using Data Tables
Whenever we run commands like Run SQL Reader, Run SQL Procedure Reader or JSON to Table, one of the outputs is a Data Table. Let’s examine this type, as it will be used in many operations.
The Data Table is a powerful and flexible data type in WAL that represents tabular data in memory. It’s generic, so you can use it to decouple data from databases and files, or even to create records entirely in memory.
Since the data is decoupled from its source, you can perform all sorts of operations:
• Perform calculations over the data
• Add calculated columns (see our previous blog post on expressions)
• Append, modify and remove data
• Convert and export it
All of this without affecting the source data.
In the toolbox, under “Data Table”, you will see many different operations available:

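To make the decoupling idea concrete, here is a minimal WAL sketch: we fill a Data Table from a query, disconnect, and keep working with the data in memory. The sqlConnect/sqlDisconnect ids, the exact parameter names and the ${orders.Rows.Count} property access are assumptions that may differ slightly in your IBM RPA version.

```
defVar --name conn --type DbConnection
defVar --name orders --type DataTable
// Connect, read the query results into memory, then disconnect
// (provider and connection string are placeholders)
sqlConnect --provider "SqlServer" --connectionstring "Server=myserver;Database=sales" conn=value
sqlExecuteReader --connection ${conn} --statement "SELECT * FROM Orders" orders=value
sqlDisconnect --connection ${conn}
// ${orders} now lives in memory, fully decoupled from the database
logMessage --message "Rows in memory: ${orders.Rows.Count}" --type "Info"
```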
Import JSON, CSV and XLS files
Database reader commands are not the only way to get a Data Table; there are other approaches as well. IBM RPA has an especially useful set of commands for reading popular formats like JSON, CSV and XLS (Excel). Using any of these commands, you can transform the input into a Data Table, and from there you can perform data transformations as well as database operations.
JSON to Table
This command converts any JSON input into a Data Table. One useful aspect of this command is the JSONPath expression, which you can use to select data in many ways. In our example here, we are getting every row from the data object ($.data), as in the sketch below.

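Here is a minimal sketch of that conversion. We assume a command id of jsonToTable for the toolbox’s “JSON to Table”, and the parameter names are illustrative; the $.data JSONPath is the one from the example above.

```
defVar --name jsonInput --type String
defVar --name rows --type DataTable
// ${jsonInput} would typically come from an earlier step, e.g. an API response or a file
// The JSONPath expression selects every row under the "data" object
jsonToTable --json "${jsonInput}" --jsonPath "$.data" rows=value
```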
Read CSV File
This command reads your CSV (comma-separated values) file and creates a Data Table. You can configure the delimiter character, as well as skip the file header.

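A minimal sketch of this step; the readCSV id appears later in this post, but the delimiter and header parameter names shown here are assumptions.

```
defVar --name suppliers --type DataTable
// Read a semicolon-delimited CSV and skip its header row
readCSV --file "C:\data\suppliers.csv" --delimiter "Semicolon" --skipHeader suppliers=value
```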
Open Excel File / Get Excel Table
To open and convert an XLS (Excel) file, you combine the “Open Excel File” and “Get Excel Table” commands. The first command creates an instance of the Excel file; from there, you can use “Get Excel Table” to select the target spreadsheet.

You can use “Get Excel Table” either to simply read the first spreadsheet in full, or to target a specific spreadsheet and range, as in the sketch below.


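A minimal sketch of the combination; the excelOpen, excelGetTable and excelClose ids and their parameters are assumptions based on the toolbox labels.

```
defVar --name excelFile --type Excel
defVar --name sheetData --type DataTable
// Open the workbook, then read a specific sheet and range into a Data Table
excelOpen --file "C:\data\report.xlsx" excelFile=value
excelGetTable --file ${excelFile} --sheet "Sheet1" --range "A1:D100" sheetData=value
excelClose --file ${excelFile}
```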
Bulk Operations
Bulk commands allow you to perform mass operations using a dataset. With these commands, you can very easily import or update a dataset in your database. You don’t need to write any SQL statements; IBM RPA generates them internally for you.
Besides being simple to use, these commands are high-performance: IBM RPA picks the best approach for each database, so you can rely on them to follow best practices.

These are the currently available commands for bulk operations:
• Bulk Insert: Inserts data in bulk from a data source into a database table; useful for importing data
• Bulk Update: Updates data in bulk from a data source to a database table
• Bulk Delete: Deletes data in bulk from a database table, based on the data in a data source
• Bulk Merge: Runs a merge operation that inserts new data and updates existing data in bulk from a data source to a database table (an “upsert”); see the sketch after this list
• Bulk Sync: Runs a mirror synchronization from the data source to the database table, performing the following operations:
  • UPDATE: Updates every database table row whose values match a row in the data source. The data source needs at least one value in the row that matches the database table row value.
  • INSERT: Inserts every data source row that has no matching row in the database table.
  • DELETE: Deletes every database table row that has no matching row in the data source.
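As promised, a minimal upsert sketch. The sqlBulkMerge id is an assumption modeled on the documented sqlBulkInsert naming, and the parameter names are illustrative; ${conn} and ${customers} would come from earlier connect and read steps.

```
defVar --name conn --type DbConnection
defVar --name customers --type DataTable
// ...connect and fill ${customers} in earlier steps...
// Upsert: rows matching existing keys are updated, new rows are inserted
sqlBulkMerge --connection ${conn} --dataTable ${customers} --tableName "Customers"
```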
Custom Mappings
Our bulk commands allow you to map source columns to destination columns, making it easy to adapt different datasets to your operations.
In the example below, the data source contains the columns Key, Person_Name, and Origin, which are mapped to the destination columns Id, Name, and Country.

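In script form, the mapping might look like the sketch below. In the designer this is done through a mapping editor; the --mappings parameter and its "source=destination" format are assumptions used here for illustration.

```
defVar --name conn --type DbConnection
defVar --name people --type DataTable
// ...connect and fill ${people} in earlier steps...
// Map source columns (Key, Person_Name, Origin) to destination columns (Id, Name, Country)
sqlBulkInsert --connection ${conn} --dataTable ${people} --tableName "People" --mappings "Key=Id,Person_Name=Name,Origin=Country"
```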
Combining commands
Exporting data
In a scenario where you need to generate a report, you might need to export your data. So, let’s see how we can achieve that.
We can export data by combining the following commands (see the sketch after this list):
• Run SQL Reader (sqlExecuteReader): executes a SQL query and returns the results as a Data Table.
  • From this point on, the data is decoupled, and subsequent commands operate in memory.
• Write Table to File (writeTableToFile): exports the Data Table to a file in CSV format.

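Putting it together, a minimal export sketch. The sqlExecuteReader and writeTableToFile ids come from the list above; the sqlConnect/sqlDisconnect ids and all parameter names are assumptions.

```
defVar --name conn --type DbConnection
defVar --name report --type DataTable
// 1. Read the query results into memory (decoupled from the database)
sqlConnect --provider "SqlServer" --connectionstring "Server=myserver;Database=sales" conn=value
sqlExecuteReader --connection ${conn} --statement "SELECT Id, Name, Total FROM Sales" report=value
sqlDisconnect --connection ${conn}
// 2. Export the Data Table to a CSV file
writeTableToFile --dataTable ${report} --file "C:\reports\sales.csv" --format "CSV"
```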
You can transform your data using Data Table operations, since it’s already decoupled from its source. After transforming and exporting your data, you have many different options:
• Create a message and send it via email
• Copy it to a folder in your organization
• Upload it to your storage provider, like Dropbox, Google Drive or Microsoft OneDrive
• Upload it to your FTP server
Importing data
You might have scenarios where you need to import information from a supplier or from another piece of software.
We can import data by combining the following commands (see the sketch after this list):
• Read CSV File (readCSV): reads a file and produces a Data Table as output.
  • From this point on, the data is decoupled, and subsequent commands operate in memory.
  • If you need to perform calculations or any other data transformations, you can safely do that now.
• Bulk Insert (sqlBulkInsert): imports the Data Table into your database.

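A minimal import sketch using the ids from the list above; the sqlConnect/sqlDisconnect ids and the parameter names are assumptions.

```
defVar --name conn --type DbConnection
defVar --name inputData --type DataTable
// 1. Read the supplier file into memory
readCSV --file "C:\inbox\suppliers.csv" inputData=value
// (optional) transform ${inputData} here: calculated columns, cleanup, etc.
// 2. Bulk insert the Data Table into the database
sqlConnect --provider "SqlServer" --connectionstring "Server=myserver;Database=purchasing" conn=value
sqlBulkInsert --connection ${conn} --dataTable ${inputData} --tableName "Suppliers"
sqlDisconnect --connection ${conn}
```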
Transferring data
Consider an integration where you need to read information from an external database, transform the data and import it into your database. You can do that with just a few commands (see the sketch after this list):
• Connect to the origin server
• Use Run SQL Reader to decouple the data
• Since the data is in memory, you can perform any modifications you want
• Connect to the target server – it can even be a different database provider
• Use Bulk Insert to import the data from memory

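A minimal transfer sketch, under the same assumptions as before (command ids other than sqlExecuteReader and sqlBulkInsert, and all parameter names, are illustrative):

```
defVar --name sourceConn --type DbConnection
defVar --name targetConn --type DbConnection
defVar --name records --type DataTable
// 1. Decouple the data from the origin server
sqlConnect --provider "MySql" --connectionstring "Server=origin;Database=catalog" sourceConn=value
sqlExecuteReader --connection ${sourceConn} --statement "SELECT * FROM Products" records=value
sqlDisconnect --connection ${sourceConn}
// (optional) transform ${records} in memory here
// 2. Import into the target server – it can even be a different provider
sqlConnect --provider "SqlServer" --connectionstring "Server=target;Database=shop" targetConn=value
sqlBulkInsert --connection ${targetConn} --dataTable ${records} --tableName "Products"
sqlDisconnect --connection ${targetConn}
```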
Dynamic table creation
If you are reading data from external sources (like a supplier list), you might not know the exact data structure in advance. Using IBM RPA commands, you can build a script that adapts to the actual data (see the sketch after this list):
• Using “Get Column Structure”, you can retrieve information about the columns of your CSV or Excel table.
• By combining this command with the previous approaches, you can dynamically create a table and perform your operations.
• In the example below, we iterate over each column to build the CREATE TABLE statement for our table.
• After building the statement, we use “Run SQL Scalar” to create the table in our database.
• Lastly, we import the CSV file into the newly created table.

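A minimal sketch of the flow. The getColumnStructure and sqlExecuteScalar ids are assumptions based on the toolbox labels, the statement-building loop is summarized as a comment (use your version’s loop and text commands for that step), and all parameter names are illustrative.

```
defVar --name conn --type DbConnection
defVar --name csvTable --type DataTable
defVar --name columns --type DataTable
defVar --name createStatement --type String
defVar --name scalarResult --type String
// 1. Inspect the file's column structure
getColumnStructure --file "C:\inbox\suppliers.csv" columns=value
// 2. Iterate over ${columns}, appending one "<name> <type>" fragment per column
//    to build ${createStatement}, e.g. "CREATE TABLE Suppliers (Name VARCHAR(100), ...)"
// 3. Create the table, then load the file into it
sqlConnect --provider "SqlServer" --connectionstring "Server=myserver;Database=purchasing" conn=value
sqlExecuteScalar --connection ${conn} --statement "${createStatement}" scalarResult=value
readCSV --file "C:\inbox\suppliers.csv" csvTable=value
sqlBulkInsert --connection ${conn} --dataTable ${csvTable} --tableName "Suppliers"
sqlDisconnect --connection ${conn}
```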
Conclusion
IBM RPA offers powerful and flexible tools for interacting with databases, enabling automation of complex data workflows with ease. From importing and exporting data in various formats to executing bulk operations and dynamically creating tables, the platform empowers developers to build scalable, efficient, and maintainable automation solutions. By leveraging these capabilities, organizations can streamline data handling, reduce manual effort, and enhance integration across systems.