Organizations are highly dependent on their cloud infrastructure, deploying much of their workload to it and relying on a variety of services and resources to support day-to-day operations. In this scenario, cloud security and performance become key areas that require close attention, and users must take certain crucial steps to keep their cloud environment secure and performant. In this context, Amazon Redshift disk usage needs to be monitored from time to time in order to maintain an acceptable level of cloud performance and security.
What is Amazon Redshift?
Redshift is a fully managed data warehouse service from AWS designed to handle analytics workloads; it supports standard SQL and connects to common business intelligence tools. Redshift delivers fast query and I/O performance for datasets of virtually any size. The first step in using Redshift is to launch a set of nodes, called a Redshift cluster. Redshift uses columnar storage technology and parallelizes and distributes queries across multiple nodes. Users can start with a few hundred gigabytes of data and scale up to a petabyte or more, allowing organizations to capture and process new analytical insights and business logic for themselves and their customers.
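Because Redshift speaks the PostgreSQL wire protocol, a standard SQL client can query its system tables directly. The sketch below checks overall disk usage from the STV_PARTITIONS system table, which reports per-slice capacity and usage in 1 MB blocks; the endpoint and credentials are hypothetical placeholders, and the psycopg2 driver is an assumption.

```python
# Minimal sketch: query Redshift over standard SQL to compute disk usage.
# STV_PARTITIONS reports per-slice disk capacity and usage in 1 MB blocks.
DISK_USAGE_SQL = """
    SELECT SUM(used) AS used_mb, SUM(capacity) AS capacity_mb
    FROM stv_partitions;
"""

def disk_usage_percent(used_mb, capacity_mb):
    """Convert used/total 1 MB block counts into a usage percentage."""
    return 100.0 * used_mb / capacity_mb

if __name__ == "__main__":
    import psycopg2  # assumed installed; requires a reachable cluster
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
        port=5439, dbname="dev", user="admin", password="...",  # placeholders
    )
    with conn, conn.cursor() as cur:
        cur.execute(DISK_USAGE_SQL)
        used_mb, capacity_mb = cur.fetchone()
        print(f"Disk usage: {disk_usage_percent(used_mb, capacity_mb):.1f}%")
```

The same query can be run from any SQL workbench connected to the cluster; the Python wrapper just makes the percentage calculation explicit.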
Understanding Amazon Redshift disk usage space and the need to analyze it
It is important to monitor the disk usage of your Redshift clusters. If disk usage reaches a certain limit, performance and I/O problems can follow. AWS sets a threshold of 90% disk usage for Redshift clusters: once a cluster's disk is 90% full or more, issues can arise in your cloud environment that will noticeably degrade performance and throughput.
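One way to watch this threshold yourself is through CloudWatch, which publishes the PercentageDiskSpaceUsed metric for each Redshift cluster. The sketch below fetches the last hour of that metric; the region and cluster identifier ("my-cluster") are placeholder assumptions, and boto3 plus AWS credentials are required for the live call.

```python
# Sketch: read a Redshift cluster's disk usage from the CloudWatch
# PercentageDiskSpaceUsed metric (namespace AWS/Redshift).
from datetime import datetime, timedelta, timezone

def latest_average(datapoints):
    """Pick the Average value from the most recent CloudWatch datapoint."""
    newest = max(datapoints, key=lambda p: p["Timestamp"])
    return newest["Average"]

if __name__ == "__main__":
    import boto3  # assumed installed; the call requires AWS credentials
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Redshift",
        MetricName="PercentageDiskSpaceUsed",
        Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-cluster"}],  # placeholder
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    pct = latest_average(resp["Datapoints"])
    print(f"PercentageDiskSpaceUsed: {pct:.1f}%")
```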
Centilytics comes into the picture
Centilytics provides a useful insight into your Amazon Redshift disk usage along with severity recommendations, allowing users to keep track of the available disk space of all Redshift clusters in their infrastructure.
There are two possible severity indications:
| Severity | Description |
|---|---|
| OK | Displayed when the disk space usage of the corresponding Redshift cluster is less than 90%. |
| CRITICAL | Displayed when the disk space usage of the corresponding Redshift cluster is 90% or more. |
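The severity rule above is simple enough to express directly: OK below 90% disk usage, CRITICAL at 90% or above, mirroring the AWS threshold cited earlier. A minimal sketch:

```python
# Severity rule for Redshift disk usage, per the table above.
REDSHIFT_DISK_THRESHOLD = 90.0  # percent; the AWS-set limit

def severity(pct_disk_used):
    """Map a cluster's disk usage percentage to a severity label."""
    return "CRITICAL" if pct_disk_used >= REDSHIFT_DISK_THRESHOLD else "OK"
```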
Descriptions of the further columns are as follows:
- Account Id: This column shows the respective account ID of the user's account.
- Account Name: This column shows the account name corresponding to the user's account.
- Region: This column shows the region in which the corresponding Redshift cluster exists.
- Identifier: This column shows the name of the corresponding Redshift cluster.
- Cluster Node Type: This column shows the type of node used in the corresponding Redshift cluster.
Filters applicable:

| Filter | Description |
|---|---|
| Account Id | Applying the account Id filter will display data for the selected account Id. |
| Region | Applying the region filter will display data according to the selected region. |
| Severity | Applying the severity filter will display data according to the selected severity type, i.e. selecting Critical will display all resources with critical severity. The same applies to the Warning and OK severity types. |
| Resource Tags | Applying the resource tags filter will display those resources which have been assigned the selected resource tag. For example, if the user has tagged some resources with a tag named "environment", selecting it from the resource tags filter will display all the matching data. |
| Resource Tags Value | Applying the resource tags value filter will display data with the selected resource tag value. For example, if the user has tagged a resource with a tag named "environment" and given it the value "production" (environment: production), the user will be able to view all resources tagged "environment: production". The tag value filter can be used only after a tag name has been selected. |
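The two tag filters above compose naturally: first narrow by tag key, then by key:value pair. The sketch below illustrates that logic on a list of resource records; the field names and sample clusters are illustrative assumptions, not Centilytics' actual schema.

```python
# Illustrative tag filtering, matching the Resource Tags / Resource Tags
# Value filters described above. Record shape is a hypothetical example.
def filter_by_tag_key(resources, key):
    """Keep resources that carry the given tag key, with any value."""
    return [r for r in resources if key in r.get("tags", {})]

def filter_by_tag(resources, key, value):
    """Keep resources whose tag key equals the given value."""
    return [r for r in resources if r.get("tags", {}).get(key) == value]

clusters = [
    {"identifier": "etl-cluster", "tags": {"environment": "production"}},
    {"identifier": "dev-cluster", "tags": {"environment": "staging"}},
    {"identifier": "adhoc-cluster", "tags": {}},
]
prod = filter_by_tag(clusters, "environment", "production")
print([r["identifier"] for r in prod])  # → ['etl-cluster']
```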