AWS CloudTrail is a service that tracks user activity and API usage, enabling compliance, governance, risk auditing, and operational auditing of your AWS infrastructure. The service simplifies auditing, troubleshooting, and compliance. You can review the logs through the CloudTrail Event History, configure CloudTrail to deliver logs to S3 buckets, and optionally forward events to CloudWatch Events and CloudWatch Logs for even more robust monitoring of your AWS resources. To learn more about AWS CloudTrail, you can read my article that covers everything you need to know about it. In this article, we will take a look at some of the best practices you should follow while using AWS CloudTrail for better security and cost optimization.
Some of the basic CloudTrail functionalities
- Provide an event history of account activity
- Provide visibility into user and resource activities.
- Simplify compliance audits
- Track and automatically respond to security threats
- Log and retain account activity, and continuously monitor it
Best practices for AWS CloudTrail
Let us take a look at some of the AWS CloudTrail best practices.
Lack of visibility
Cloud resources are ephemeral, which makes it difficult to keep an accurate inventory of assets. According to one report, a cloud resource’s average lifespan is only two hours and seven minutes. Additionally, many companies run platforms or environments that span multiple cloud regions and accounts. This fragments visibility and makes risks harder to detect. And you can’t secure what you can’t see.
Use a cloud security solution that offers visibility into the types and volume of resources across multiple cloud accounts and regions in a single pane of glass.
Exposed root accounts
Your root account can do the most harm when an unauthorized party acquires access to it. Admins often forget to disable root API access.
No one should use your Amazon Web Services root account for day-to-day work, not even your top admins. Never share root credentials across applications and users. Root accounts must be protected by 2FA and used as sparingly as possible.
Not changing IAM access keys
IAM access keys are often not rotated. That weakens IAM’s ability to secure your groups and user accounts, giving attackers a longer window in which to acquire and abuse the keys. Rotating keys also ensures that old keys cannot be used to access critical services.
Rotate your access keys at least once every 90 days. If you grant users the necessary permissions, they can rotate their own access keys.
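The 90-day check above is easy to automate. In practice you would read each key's `CreateDate` from IAM's `ListAccessKeys` API; the sketch below just shows the age comparison with hypothetical timestamps.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # rotation window recommended above

def key_needs_rotation(create_date, now=None):
    """Return True if an access key is older than the 90-day rotation window."""
    now = now or datetime.now(timezone.utc)
    return now - create_date > MAX_KEY_AGE

# Illustrative timestamps: a 120-day-old key is flagged, a 10-day-old one is not.
now = datetime.now(timezone.utc)
old_key = now - timedelta(days=120)
fresh_key = now - timedelta(days=10)
print(key_needs_rotation(old_key, now))    # True
print(key_needs_rotation(fresh_key, now))  # False
```

Running a check like this on a schedule (for example, from a Lambda function) turns key rotation from a manual chore into an automatic audit.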
Poor authentication practices
Stolen or lost credentials are a significant cause of cloud security incidents. It is common to find access credentials for public cloud environments exposed on the internet, as has happened in several well-known data breaches. Organizations need a reliable way to detect compromised accounts.
Strong password policies and 2FA should be enforced on the AWS platform. Amazon recommends enabling 2FA for every account that has a console password.
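A strong password policy can be set account-wide through IAM's `UpdateAccountPasswordPolicy` API. The parameter names below follow that API; the specific values are illustrative choices, not official AWS recommendations.

```python
import json

# Sketch of account password policy parameters (IAM UpdateAccountPasswordPolicy).
# The values here are example choices, not AWS-mandated numbers.
password_policy = {
    "MinimumPasswordLength": 14,
    "RequireSymbols": True,
    "RequireNumbers": True,
    "RequireUppercaseCharacters": True,
    "RequireLowercaseCharacters": True,
    "MaxPasswordAge": 90,            # force periodic password rotation
    "PasswordReusePrevention": 24,   # block re-use of recent passwords
    "AllowUsersToChangePassword": True,
}

print(json.dumps(password_policy, indent=2))
```

You would pass these parameters to the API via the AWS CLI or an SDK such as boto3; 2FA enrollment itself is configured per user or enforced via IAM policy conditions.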
Too many privileges
AWS IAM can be used to manage all user groups and accounts, with fine-grained policy and permission options.
However, admins often assign overly permissive access to AWS resources. That lets users make changes and reach resources they should not be able to, and if an attacker compromises such an account, even more harm can be done.
Like any other permission system, your IAM configuration should follow the principle of least privilege: every user and group should have only the permissions required to do their job, and no more.
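In IAM policy terms, least privilege means listing specific actions on specific resources instead of wildcards. Here is a minimal sketch; the bucket name is a placeholder, and a real policy would be scoped to whatever your user actually needs.

```python
import json

# A minimal least-privilege policy sketch: this user can read objects
# from one specific bucket and do nothing else. The bucket name is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadReportsOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Contrast this with `"Action": "*"` on `"Resource": "*"`, which grants far more than any single job requires and turns one stolen credential into full account compromise.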
Restrict access to the CloudTrail bucket logs and use of 2FA for the bucket deletion
Unrestricted access, even for admins, increases the risk of unauthorized access if credentials are stolen, for example through a phishing attack. If an AWS account is compromised, 2FA makes it much harder for attackers to delete the evidence of their actions and conceal their presence.
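One way to enforce this on the log bucket is a bucket policy statement that denies deletion unless the caller authenticated with MFA, using the `aws:MultiFactorAuthPresent` condition key. The sketch below uses a placeholder bucket name.

```python
import json

# Sketch of a bucket policy statement that denies object and bucket deletion
# on the CloudTrail log bucket unless the caller authenticated with MFA.
# "example-cloudtrail-logs" is a placeholder bucket name.
deny_delete_without_mfa = {
    "Sid": "DenyDeleteWithoutMFA",
    "Effect": "Deny",
    "Principal": "*",
    "Action": ["s3:DeleteObject", "s3:DeleteBucket"],
    "Resource": [
        "arn:aws:s3:::example-cloudtrail-logs",
        "arn:aws:s3:::example-cloudtrail-logs/*",
    ],
    # BoolIfExists also denies callers where the MFA key is absent entirely.
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
}

print(json.dumps(deny_delete_without_mfa, indent=2))
```

This statement would be one entry in the bucket policy's `Statement` array, alongside the standard statements that allow CloudTrail itself to write logs.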
Enable CloudTrail Log File Validation
Apart from delivering CloudTrail events to your AWS S3 bucket, you can also instruct CloudTrail to create a digest file for your log files and deliver it to the same S3 bucket.
You can use the digest file to validate the integrity of your CloudTrail log files and verify that they have not been tampered with after delivery to your S3 bucket. Log file validation uses SHA-256 for hashing and SHA-256 with RSA for digital signing.
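The hashing half of this scheme is simple to illustrate: if a delivered log file is edited, its SHA-256 hash no longer matches the one recorded in the digest file. The log content below is a made-up fragment; in practice you would run `aws cloudtrail validate-logs` rather than hash files yourself, and the digest file itself is additionally RSA-signed.

```python
import hashlib

def sha256_hex(data):
    """Hex-encoded SHA-256 digest, as CloudTrail uses for log file hashes."""
    return hashlib.sha256(data).hexdigest()

# A made-up log fragment standing in for a delivered CloudTrail log file.
delivered_log = b'{"eventName": "ConsoleLogin", "eventSource": "signin.amazonaws.com"}'
recorded_hash = sha256_hex(delivered_log)  # what the digest file would record

# An attacker editing the log changes its hash, so validation fails.
tampered_log = delivered_log.replace(b"ConsoleLogin", b"SomethingElse")
print(recorded_hash == sha256_hex(delivered_log))  # True  - untouched file passes
print(recorded_hash == sha256_hex(tampered_log))   # False - tampering is detected
```

The RSA signature over the digest file closes the remaining gap: an attacker who rewrites both the logs and the digest still cannot forge the signature.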
Use Auto Scaling to dampen DDoS effects
Amazon EC2 Auto Scaling helps ensure that you have the correct number of EC2 instances available to handle your application’s load.
You can create collections of EC2 instances, called Auto Scaling groups, and specify a minimum number of instances for each group; Auto Scaling makes sure the group never shrinks below that size.
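The sizing knobs look like this. The field names follow the `CreateAutoScalingGroup` API; the group name and numbers are placeholders chosen for illustration.

```python
# Sketch of the sizing parameters for an Auto Scaling group
# (CreateAutoScalingGroup API field names; name and numbers are placeholders).
asg = {
    "AutoScalingGroupName": "example-web-asg",
    "MinSize": 2,           # never fewer than 2 instances, even at low load
    "DesiredCapacity": 4,   # normal steady-state size
    "MaxSize": 10,          # cap on scale-out during a traffic spike or DDoS
}

# The three values must be consistent: MinSize <= DesiredCapacity <= MaxSize.
print(asg["MinSize"] <= asg["DesiredCapacity"] <= asg["MaxSize"])  # True
```

During a DDoS, scaling out toward `MaxSize` absorbs load while you mitigate; `MaxSize` also caps the cost an attack can inflict on you.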
Enable CloudTrail in All Regions
When you create an AWS CloudTrail trail, you can create it for one region or for all regions in your AWS account.
Even when your entire workload runs in a single region, you should still enable CloudTrail in all AWS regions as a best practice. This way, when activity happens in any region other than your primary one, you can track it and take action immediately.
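Making a trail multi-region is a single flag on CloudTrail's `CreateTrail` API. Below is a sketch of the parameters you might pass (for example, via boto3's `cloudtrail` client); the trail and bucket names are placeholders.

```python
# Sketch of CreateTrail parameters for an all-regions trail with log file
# validation enabled. Trail and bucket names are placeholders.
trail_params = {
    "Name": "org-wide-trail",
    "S3BucketName": "example-cloudtrail-logs",
    "IsMultiRegionTrail": True,        # capture events from every AWS region
    "EnableLogFileValidation": True,   # also create SHA-256 digest files
}

print(trail_params["IsMultiRegionTrail"])  # True
```

With `IsMultiRegionTrail` set, activity in a region you never use (a common sign of a compromised account) still lands in your logs.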
How Can Cloudanix Help?
Cloudanix helps you to implement these AWS CloudTrail best practices. We provide you with a recipe for AWS CloudTrail which audits your AWS account and lets you know if you are not implementing the best practices. Furthermore, by implementing these best practices, you also adhere to compliance standards like NIST, HIPAA, MAST, GDPR, and so on. Sign up for your free trial today!