Modern IT environments are dynamic and rapidly changing. If you were to take a snapshot of your current IT environment (software, cloud resources, endpoints, routers, switches, servers, databases, etc.) and compare it with a snapshot taken six months later, you would probably find the two pictures drastically different. This kind of rapid change is what makes securing these environments so difficult, and the challenge is ever present when it comes to securing data. Authentication and access to data are dynamic, not static. Each day organizations create and store new pieces of data critical to their operations, and the people, teams, and applications that use that data change on a near-hourly basis.
The traditional approach to securing data has been to identify which connections to databases are "trusted" and exclude those connections from monitoring. For example, if a trusted front-end application (identified by source application and client IP address) uses a backend database, we expect the interactions between the application and the database to be secure and trusted. Excluding trusted connections drastically reduces the amount of database activity that needs to be collected and analyzed.
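To make that concrete, here's a minimal sketch (in Python) of what the traditional model boils down to: a hand-maintained allowlist that monitoring simply skips. This is not Guardium code - the application names and IP addresses are invented for illustration.

```python
# Sketch of the traditional "trusted connection" model: monitoring skips
# anything on a static, hand-maintained allowlist. (Illustrative only;
# the app names and IPs below are invented.)
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    source_app: str
    client_ip: str

# The allowlist someone has to keep up to date by hand.
TRUSTED_CONNECTIONS = {
    Connection("payments-frontend", "10.0.4.21"),
    Connection("reporting-service", "10.0.7.88"),
}

def should_monitor(conn: Connection) -> bool:
    """Traditional model: monitor everything EXCEPT the trusted allowlist."""
    return conn not in TRUSTED_CONNECTIONS
```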
Seems good, right?
Well, remember earlier when I talked about the rapid and dynamic nature of the IT environment? IP addresses and ports change. People leave the company. Servers are relocated to different datacenters. Suddenly, that static list of trusted connections you worked so hard to cultivate and keep up to date becomes an impossible game of Whac-A-Mole. You also find there are security gaps - like threat actors piggybacking on trusted connection channels (similar to how DNS tunneling evades firewall rules). And then, to add insult to injury, your boss says the company is adopting a Zero Trust approach to security. The idea of "trusting" connections is no longer valid because it goes against the spirit of Zero Trust security, where trust is continuously and dynamically assessed with every transaction. Now what do you do?
This is why Guardium has introduced the Real-time Trust Evaluator for data.
Here's how it works:
Each time a new database session is established, that session is analyzed in real time for risk. No connection is trusted by default (unless you explicitly want it to be). This is an important departure from the old way of trusting connections and supports a Zero Trust approach. The riskier the session, the greater the detail in which it is monitored. Risk is based on multiple contextual data points in and around the session itself. For example, if an application connects to a database using credentials in clear text, that greatly increases the riskiness of the session; a session established with a weak form of encryption likewise carries higher risk.

Other risk-contributing data points come from machine learning. The connection probability engine learns what is normal for a given database server and then, in real time, compares every new connection against that learned model. An example use case is credential stuffing. Suppose a threat actor comes armed with a trove of credentials from a previous data breach and attempts to replay those credentials against a database to gain access. Guardium's machine learning can detect that activity (even if the traffic is encrypted) and dynamically adjust the risk level of the session to indicate high-risk activity.
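To illustrate the idea (and only the idea - the factor names, weights, and values below are invented, and the real connection probability engine is a learned model rather than fixed rules), here's roughly what combining contextual signals into a single session risk score could look like:

```python
# Illustrative sketch only: fixed weights stand in for the contextual
# factors and learned anomaly score described above.

def score_session_risk(clear_text_credentials: bool,
                       weak_encryption: bool,
                       anomaly_score: float) -> float:
    """Combine contextual signals into a session risk score from 0.0 to 1.0.

    anomaly_score is assumed to come from a model of what is "normal" for
    this database server (e.g., it spikes during a credential-stuffing burst).
    """
    risk = 0.0
    if clear_text_credentials:
        risk += 0.4   # credentials sent in the clear
    if weak_encryption:
        risk += 0.2   # e.g., an outdated protocol version or weak cipher
    risk += 0.4 * min(max(anomaly_score, 0.0), 1.0)  # learned deviation from normal
    return min(risk, 1.0)

# Encrypted traffic and no clear-text credentials - but an unusual burst of
# failed logins pushes the anomaly score, and therefore the risk, upward.
print(score_session_risk(clear_text_credentials=False,
                         weak_encryption=False,
                         anomaly_score=0.9))  # ~0.36
```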
As stated before, the risk of a given session determines the level of monitoring (collection and analysis) applied to it. As risk increases, the fidelity of event collection increases. It's analogous to a state-of-the-art security camera.
Imagine you were placed in charge of protecting the assets of a bank vault. Monitoring the vault is a high-definition security camera equipped with motion detection, facial recognition, a microphone, and night vision. The camera detects motion, so it starts recording in low fidelity (low risk: 1080p, 30 frames per second). It turns out to be just a bank employee walking by, performing their normal duties. That night, at midnight, when the bank is closed, the camera detects motion again. It switches to night vision and starts recording (medium risk: 1080p, 60 frames per second). A person dressed in black enters the frame. The person lowers a bag to the floor, produces a torch, and holds it to the vault door (high risk: 4K, 60 frames per second, audio recording on).
In this analogy, the security camera was set up to monitor identity and access for both internal and external users. It also progressively captured more data as the risk level increased. This is important for several reasons:
1) It drastically reduced noisy/useless footage that someone had to look through
2) It dynamically captured more detail and evidence where needed, based on the riskiness of the scenario
3) It reduced the cost of the security infrastructure: local storage to capture the footage, bandwidth when transmitting the footage, archival storage, CPU/processing, electricity, etc.
The same is true in the database monitoring world. You want a solution that doesn't explicitly trust connections to the database. You want something that can dynamically change the fidelity of monitoring based on the risk of the connection - in real time. You want something that is going to cut down on cost and reduce the noise for your security practitioners.
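Bringing the camera analogy back to databases, here's a hedged sketch of what mapping session risk to collection fidelity might look like. The level names, thresholds, and fields are invented for illustration - this is the analogue of the camera's resolution tiers, not the product's actual policy.

```python
# Illustrative mapping from session risk to collection detail -- the
# database-monitoring analogue of the camera's resolution tiers.
# (Level names, thresholds, and fields are invented for this sketch.)

def monitoring_level(risk: float) -> dict:
    """Decide what to collect for a session with the given risk (0.0-1.0)."""
    if risk < 0.3:
        return {"level": "baseline", "log_sql_text": False, "capture_result_sets": False}
    if risk < 0.7:
        return {"level": "elevated", "log_sql_text": True, "capture_result_sets": False}
    return {"level": "full", "log_sql_text": True, "capture_result_sets": True}

# Re-evaluated for every new session, so the policy tracks the environment
# in real time instead of relying on a hand-maintained allowlist.
print(monitoring_level(0.85))
# {'level': 'full', 'log_sql_text': True, 'capture_result_sets': True}
```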
Which approach will you choose?
TL;DR:
Guardium Real-time Trust Evaluator delivers a dynamic, real-time, risk-based approach to monitoring data in support of Zero Trust methodology. The old ways of trusting connections aren't so great.
Ready to try Guardium Real-time Trust Evaluator? Read more here:
Want to speak to a member of the Guardium team? Comment below or send me a direct message through IBM Community.