TRIRIGA

  • 1.  TRIRIGA and Splunk? Other SIEM?

    Posted Tue March 12, 2024 12:49 PM

    Has anyone integrated TRIRIGA logs with Splunk? We are looking into this, but I'm curious if anyone has any tips or lessons learned. Anything from another SIEM provider, perhaps?

    Thanks!



    ------------------------------
    Jeff George
    ------------------------------


  • 2.  RE: TRIRIGA and Splunk? Other SIEM?

    Posted Tue March 12, 2024 01:58 PM

    Hi Jeff,

     

    We use Splunk for log file analysis and are just starting to send metrics into Splunk HEC (HTTP Event Collector, JSON ingestion) for additional monitoring of DB and OS metrics; a minimal sketch of an HEC push follows below.  While this is more about overall operations monitoring and failure analysis, security items could be treated in the same manner, probably using the security log more than some of the other logs.
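
    For anyone getting started, here is a minimal sketch (Python) of pushing one JSON metric event to an HEC endpoint. The URL, token, index, and metric fields are placeholders, not our actual setup:

        # Minimal sketch: send one DB/OS metric event to Splunk HEC as JSON.
        # The URL, token, index, and metric field names below are placeholders.
        import json
        import requests

        HEC_URL = "https://splunk.example.com:8088/services/collector/event"
        HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

        event = {
            "event": {"metric": "db_active_sessions", "value": 42, "host_role": "APP"},
            "sourcetype": "_json",
            "index": "tririga_metrics",
        }

        resp = requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            data=json.dumps(event),
            timeout=10,
        )
        resp.raise_for_status()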

     

    Not sure what areas you're looking for pointers on, but we ingest the system, security, and ffdc (Liberty) logs.  I have built quite a few dashboards focused on particular use cases: for developers, error-log keyword searching, counts, etc.; for infrastructure, mostly counts, logins, failures by type, errors, and so on.

     

    One key idea that might be helpful when using Splunk, if you have multiple environments (e.g. PROD, QAS, DEV, sandboxes, etc.) and/or multiple servers per environment, is to use Splunk "categories" for tagging attributes.  Tagging such things as:

    "environment" – PROD, QAS, DEV, etc.

    "server_type" – APP, PROCESS

    "server_subtype" – (more for multiple process servers) such as ASYNC (for APP servers), Un-named, REM, CAD, BIRT (if you have multiple Process servers with named users)

    "location" – (if you have environments in various locations, you could use the company location code, state, etc.)

    (other categories as you see fit)

     

    These are all built into a Splunk category field (via each server's assetinfo.json file, used by the Universal Forwarder) so that each log push to Splunk carries these attributes along with the log data; a hypothetical example of such a file follows below.
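
    To illustrate, a sketch in Python that writes a per-server assetinfo.json carrying the category attributes above. The field names mirror this post; the exact schema your Universal Forwarder setup consumes may differ:

        # Hypothetical sketch: generate a per-server assetinfo.json with the
        # category attributes described above. Field names mirror the post;
        # the exact schema your forwarder setup expects may differ.
        import json

        asset_info = {
            "environment": "PROD",      # PROD, QAS, DEV, ...
            "server_type": "APP",       # APP or PROCESS
            "server_subtype": "ASYNC",  # ASYNC, REM, CAD, BIRT, ...
            "location": "US-TX-01",     # placeholder company location code
        }

        with open("assetinfo.json", "w") as f:
            json.dump(asset_info, f, indent=2)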

     

    When building your Splunk dashboards, you can use input selectors for each of the categories (above) to filter the logs you want to show/aggregate.  For instance, an environment selector on a dashboard will let you pull all of the "PROD" server logs, consolidated by time, into one table of entries (searching for, say, "error").  This shows every log record containing "error" from all of the PROD servers together, in chronological order, which would be very time-consuming, if possible at all, if you looked at each server's log files separately.  (A sketch of such a consolidated search follows below.)

    This lets us correlate errors and issues BETWEEN the multiple servers in our PROD environment.  (This is just one example of how the categories help you group log data in Splunk.)
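
    To make that concrete, here is a hedged Python sketch that runs the same kind of consolidated search through Splunk's REST search API. The index name, category fields, and credentials are placeholders; a dashboard would wire the same filters to input selectors instead:

        # Minimal sketch: pull "error" entries from all PROD servers in one
        # time-ordered stream via Splunk's REST search API. Index name and
        # category fields are placeholders matching the tagging scheme above.
        import requests

        SPLUNK = "https://splunk.example.com:8089"
        SEARCH = (
            'search index=tririga environment="PROD" "error" '
            "| sort _time "
            "| table _time host server_type server_subtype _raw"
        )

        resp = requests.post(
            f"{SPLUNK}/services/search/jobs/export",
            auth=("svc_account", "changeme"),  # placeholder credentials
            data={"search": SEARCH, "output_mode": "json"},
            stream=True,
            timeout=60,
        )
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:
                print(line.decode("utf-8"))  # one JSON result per line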

     

    In your case, you could use the security log in the same way as above, looking for authenticated users and the like.

     

    I know this will probably (hopefully) spawn more dialog :)   (always looking for ways to monitor/analyze TRIRIGA)

     

    Regards,

    Lester

     

    Lester Drazin

    Lockheed Martin Corporation

     

     






  • 3.  RE: TRIRIGA and Splunk? Other SIEM?

    Posted Tue March 12, 2024 04:40 PM

    Update – this could also be referring to MTU size on the network devices.  

     






  • 4.  RE: TRIRIGA and Splunk? Other SIEM?

    Posted Wed March 13, 2024 11:27 AM

    Hi Jeff!

    Splunk is a good choice for continuous 24/7 monitoring with triggers, email alerts, and so on. Lester wrote a great response, so I won't go into more detail on that.

    Since your question mentioned "Other SIEM," I thought I would note that we also created a React app to tail logs:

    Pros:

    • You can search
    • Near-real-time tailing
    • Context highlighting
    • You can create an unlimited number of saved searches to find common issues quickly
    • Export filtered results

    Cons:

    • It only views the TRIRIGA logs
    • It only tails logs for the server you are currently on (App or Process, not both)

    It is much lighter than Splunk, but I actually use both.

    • Splunk's real-time tailing was disabled for us for some reason; I'm not sure whether it costs more or why a Splunk administrator would disable it.
    • I use the React log viewer to know when my Object Migrations are importing and when I'm testing development changes or platform upgrades. I want to see the moment an error occurs as I use the application, and that is easy to do with live tailing (see the sketch after this list). Splunk can do this too, but you need the real-time feature enabled.
    • Because it is a front-end (UX) app, every environment we use can have the React log viewer without configuring Splunk or consuming licenses.
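
    This is obviously not our React app, but for anyone who wants the bare idea, a minimal Python stand-in for live tailing with keyword matching (the log path and keywords are placeholders):

        # Bare-bones sketch of live log tailing with keyword matching, in the
        # spirit of the React viewer described above (this is NOT that app).
        # The log path and keywords are placeholders.
        import time

        LOG_PATH = "/opt/tririga/logs/server.log"  # placeholder path
        KEYWORDS = ("ERROR", "Exception")

        with open(LOG_PATH, "r", errors="replace") as f:
            f.seek(0, 2)               # jump to end of file, like tail -f
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)    # no new data yet; wait and retry
                    continue
                if any(k in line for k in KEYWORDS):
                    print(line, end="")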

    Again, if you want to monitor without a human watching, you definitely need something like Splunk with alerts set up.

    Mike



    ------------------------------
    Michael Pemberton
    ------------------------------



  • 5.  RE: TRIRIGA and Splunk? Other SIEM?

    Posted Wed March 20, 2024 03:42 PM

    Hi Jeff:

    We use the Splunk file forwarder to send the security.log files once a day for failed-login information. Splunk DB Connect is a JDBC tool for querying the database; Splunk runs nightly custom queries (against custom database views) for metadata changes, the session log, the permission log, the admin change log (a power-user query of the audit tables), and a user account log.

    Custom workflows were created to generate data such as the permission and session logs.  When a user is activated, a custom log entry of their group permissions (added, existing, or deleted) is created. The metadata change log records all changes to BOs, forms, queries, groups, workflows, and other metadata objects. The DBA also created a database audit-trail view that Splunk reads.

    One of the challenges was creating the unique identifier for Splunk DB Connect's "rising column" for each record type.  This is the sort mechanism DB Connect uses to find the changes since the last query.  Using only modified date (in milliseconds) did not meet the requirements, so the "counter" is a concatenation of a few unique components; the queries used modified date + rownum.  For the user log, which had multiple rows per user (one join row per group), we used SQL's ROW_NUMBER() OVER (PARTITION BY p.triusernametx ORDER BY p.triusernametx) as an additional concatenated component of the counter to make each row unique.  A hedged sketch of such a query is below.
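
    For anyone building something similar, here is what that rising column can look like. Only triusernametx comes from this post; the other table and column names are invented for illustration, and Oracle syntax is assumed. DB Connect consumes the SQL directly; it is held in a Python string here only to keep this thread's examples in one language:

        # Hedged sketch of a "rising column" query: modified date (ms)
        # concatenated with a per-user row number, so every row is unique and
        # monotonically increasing. Table/column names other than
        # triusernametx are invented for illustration (Oracle syntax assumed).
        USER_GROUP_LOG_SQL = """
        SELECT
            p.triusernametx,
            g.group_name,
            p.modified_date_ms,
            TO_CHAR(p.modified_date_ms)
              || LPAD(TO_CHAR(ROW_NUMBER() OVER (
                       PARTITION BY p.triusernametx
                       ORDER BY p.triusernametx)), 4, '0') AS rising_id
        FROM user_permission_view p
        JOIN user_group_view g
          ON g.user_id = p.user_id
        """

        if __name__ == "__main__":
            print(USER_GROUP_LOG_SQL)  # paste into the DB Connect input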

    Hope this helps.

    Charlie



    ------------------------------
    Charles McGarvey
    ------------------------------