Looking at the screenshot, around 2.8 TB is taken by Ariel alone, which includes payloads, records, and indexes.
Can you please confirm what the Payload Index Retention is set to on your deployment? Chances are this is taking up quite a lot of space.
Try du -sh /store/ariel/events/records/2024/5/8/9 and confirm the size taken by the lucene / super directories. If they are consuming a large amount of space, the Payload Index Retention is what is eating the disk, and you may need to reduce the index retention.
You can run the same check on other hourly directories as well to gain more insight into what is driving the high utilization.
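For example, something along these lines should show how an hourly directory splits between record data and the lucene / super index directories (the path is illustrative, taken from the example above; adjust year/month/day/hour to your environment):

du -sh /store/ariel/events/records/2024/5/8/9
du -sh /store/ariel/events/records/2024/5/8/9/*
for h in $(seq 0 23); do du -sh /store/ariel/events/records/2024/5/8/$h 2>/dev/null; done

The loop assumes the year/month/day/hour layout shown in the path above; the per-subdirectory output is what tells you whether lucene / super makes up the bulk of the usage.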
Original Message:
Sent: Wed May 08, 2024 02:32 PM
From: Jonathan Pechta
Subject: Event Processor Disk Storage fills up with no reason
@John Dawson
This looks similar to a post on Reddit where I asked about both indexes and persistent_queue. I'm not 100% sure whether the question is about /store/ariel/events/payloads specifically or about /store overall growing while the event rate stays consistent.
I responded in detail here: https://www.reddit.com/r/QRadar/comments/1cmdhn4/comment/l35zn3o/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
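If it helps, a quick way to see where the growth actually is (payloads vs. records vs. persistent_queue vs. the rest of /store) is something like the following; these are the usual paths on an Event Processor, but confirm they exist on your appliance:

df -h /store
du -sh /store/ariel/events/payloads /store/ariel/events/records /store/persistent_queue
du -sh /store/* 2>/dev/null | sort -h

Comparing the same output a day apart should show which directory is absorbing the nightly growth.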
------------------------------
Jonathan Pechta
IBM Security - Community of Practice Lead
jonathan.pechta1@ibm.com
Original Message:
Sent: Wed May 08, 2024 12:53 PM
From: John Dawson
Subject: Event Processor Disk Storage fills up with no reason
Were any indexes enabled recently?
What is the current setting for ariel index retention?
Does the client use quick filter searches?
I can see from the screenshots that the size of the records folder for April is much bigger than for March; however, the size of the payloads folder is actually smaller.
This would indicate that some indexes were enabled, possibly in mid-March. Ariel indexes are stored in the payloads folder. You can verify which indexes are enabled through the Admin tab under Index Management.
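To put numbers on that month-over-month comparison, something like the following should work (the monthly paths are assumed from the directory layout referenced earlier in the thread):

du -sh /store/ariel/events/records/2024/3 /store/ariel/events/records/2024/4
du -sh /store/ariel/events/payloads/2024/3 /store/ariel/events/payloads/2024/4

A jump between the two months in one folder but not the other helps pin down when the extra data started appearing.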
Thanks
------------------------------
John Dawson
QRadar Support Architect
IBM
Original Message:
Sent: Wed May 08, 2024 06:57 AM
From: Simone Tacchella
Subject: Event Processor Disk Storage fills up with no reason
The size of the /store partition is 3.2 TB, and the situation was already difficult given the ingestion of DNS logs at the perimeter. By setting a retention period of 3 months instead of the previous 6 and adding a routing rule to drop the largest events, the situation had been stable for a few weeks at 88%, with the events being stored roughly matching those being deleted. In the past two weeks the disk has increased by 1%, reaching 90%. We expect it to rise by the same amount this week too, but we can't explain why, as there are no sources that have increased their number of logs. The first increase occurred from 9 PM to 12 AM and the second from 11 PM to 2 AM. I ran the commands from the previous comment and have attached screenshots.
------------------------------
Simone Tacchella
Original Message:
Sent: Wed May 08, 2024 06:47 AM
From: John Dawson
Subject: Event Processor Disk Storage fills up with no reason
Hi Simone,
A couple of questions (see the command sketch after the list):
- What is the current size of the /store partition?
- What is the current utilisation of the /store partition?
- Is there a specific time that it fills at night?
- What are your event retention settings?
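As a rough sketch for gathering the first three answers on the Event Processor (the event retention settings themselves are in the QRadar UI rather than on the command line):

df -h /store
# log usage every hour to catch the exact time it fills overnight
echo "$(date '+%F %T') $(df -h /store | awk 'NR==2{print $5}')" >> /root/store_usage.log

Putting the second command in an hourly cron job for a couple of nights will show exactly when the jump happens; /root/store_usage.log is just an example path.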
Thanks
------------------------------
John Dawson
QRadar Support Architect
IBM
Original Message:
Sent: Tue May 07, 2024 10:24 AM
From: Simone Tacchella
Subject: Event Processor Disk Storage fills up with no reason
Hi everyone, I'm asking for help because I'm trying to understand how it's possible that the /store partition on my client's Event Processor keeps growing (we're at 90%) even though no new sources have been added and no existing sources are sending more logs than they should. A workaround had been applied previously by dropping a series of events, but suddenly, at night, over the course of 3-4 hours, part of the disk fills up even though there are no peaks in the processor, console, or collector graphs. Could you recommend some troubleshooting steps to see why the disk keeps filling up?
Your help would be much appreciated, thank you.
------------------------------
Simone Tacchella
------------------------------