
391 points by kinj28 | 1 comment

Could there be any link between the two events?

Here is what happened:

Some 600 instances were spawned within 3 hours before AWS flagged it and sent us a health event. Numerous domains were verified, and we could see that an SES quota increase request had been made.
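For anyone scoping a similar spree, here is a rough boto3 sketch for listing instances launched within a recent window across all enabled regions. The 3-hour window is just an example, not a recommendation:

    # Rough sketch (not production code): list EC2 instances launched in the
    # last 3 hours across all enabled regions. Assumes boto3 credentials are set up.
    from datetime import datetime, timedelta, timezone
    import boto3

    cutoff = datetime.now(timezone.utc) - timedelta(hours=3)
    regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    if inst["LaunchTime"] >= cutoff:
                        print(region, inst["InstanceId"], inst["InstanceType"], inst["LaunchTime"])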

We are still investigating the vulnerability on our end. Our initial suspect list has two candidates: a leaked API key, or console access where MFA wasn't enabled.
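Since both suspects come down to stale credentials, a quick audit along these lines can help narrow it down. This is only a sketch; an IAM credential report covers much of the same ground:

    # Rough sketch: flag IAM users with console logins but no MFA, and show when
    # each access key was last used. PasswordLastUsed only appears for users who
    # have actually signed in to the console.
    import boto3

    iam = boto3.client("iam")
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            has_mfa = bool(iam.list_mfa_devices(UserName=name)["MFADevices"])
            if "PasswordLastUsed" in user and not has_mfa:
                print(f"{name}: console access without MFA, last login {user['PasswordLastUsed']}")
            for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
                last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                used = last["AccessKeyLastUsed"].get("LastUsedDate", "never")
                print(f"{name}: key {key['AccessKeyId']} ({key['Status']}), last used {used}")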

timdev2 | No.45658257
I would normally say "that must be a coincidence," but I had a client account compromise as well, and it was very strange:

The client is a small org, and two very old IAM users suddenly had recent (yesterday) console logins and password changes.

I'm investigating the extent of the compromise, but so far it seems all they did was open a ticket to turn on SES production access and increase the daily email limit to 50k.

These were basically dormant IAM users from more than 5 years ago, and it's certainly odd timing that they'd suddenly pop on this particular day.
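If it helps anyone else, here is a rough sketch for flagging that exact pattern: users created long ago whose console password was suddenly used. The 5-year and 7-day thresholds are arbitrary placeholders:

    # Rough sketch: flag IAM users created long ago whose console password was
    # used recently. The 5-year / 7-day thresholds are arbitrary placeholders.
    from datetime import datetime, timedelta, timezone
    import boto3

    now = datetime.now(timezone.utc)
    old_cutoff = now - timedelta(days=5 * 365)
    recent_cutoff = now - timedelta(days=7)

    iam = boto3.client("iam")
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            last_login = user.get("PasswordLastUsed")
            if user["CreateDate"] < old_cutoff and last_login and last_login > recent_cutoff:
                print(f"dormant user suddenly active: {user['UserName']} "
                      f"(created {user['CreateDate']:%Y-%m-%d}, last login {last_login:%Y-%m-%d})")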

replies(3): >>45659427 >>45663754 >>45672001
LeonardoTolstoy | No.45663754
Almost this exact thing happened to me about a year ago: a very old account login, then SES access with a request to raise the email limit. We were only tipped off quickly because they had to open a ticket to get the limit raised.

If you haven't, check newly made Roles as well. We quashed the compromised users pretty quickly (including my own, which we figured out was the origin), but got a little lucky because I just started cruising the Roles and killing anything less than a month old or with admin access.
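Something like this rough sketch can surface those roles; the 30-day cutoff and the AdministratorAccess check are just the heuristics I would start with, and you should review the output by hand before deleting anything:

    # Rough sketch: list IAM roles created in the last 30 days or carrying the
    # AdministratorAccess managed policy. Service-linked roles will show up too.
    from datetime import datetime, timedelta, timezone
    import boto3

    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    iam = boto3.client("iam")

    for page in iam.get_paginator("list_roles").paginate():
        for role in page["Roles"]:
            attached = iam.list_attached_role_policies(RoleName=role["RoleName"])["AttachedPolicies"]
            is_admin = any(p["PolicyArn"].endswith("/AdministratorAccess") for p in attached)
            if role["CreateDate"] >= cutoff or is_admin:
                print(role["RoleName"], role["CreateDate"], "ADMIN" if is_admin else "")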

To play devil's advocate a bit: in our case we are pretty sure my key actually did get compromised, although we aren't precisely sure how (probably a combination of me being dumb, my org being dumb, and some guy putting two and two together). But we did trace the initial users being created to nearly a month prior to the actual SES request. It is entirely possible whoever hit you had you compromised for a while, and then once AWS went down they decided that was the perfect time to attack, when you might not notice just another AWS thing happening.

replies(1): >>45671974
timdev2 | No.45671974
Thanks for sharing. After digging in, it appears that something very similar happened here after all. It looks like an access key with an admin role leaked some time ago. At first they just ran a quiet GetCallerIdentity, then sat on it. Then, on outage day, they leveraged it. In our case they just did the SES thing and tried to persist access by setting up IAM Identity Center.
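For anyone tracing the same pattern, a rough sketch: pull the CloudTrail events recorded for the suspect access key (the LookupEvents API only covers the last 90 days of management events, per region), then check whether an IAM Identity Center instance exists that nobody on your team created. The key ID below is AWS's documented example, used as a placeholder:

    # Rough sketch: CloudTrail activity for a suspect access key, plus a check for
    # IAM Identity Center instances. Run per region you care about.
    from datetime import datetime, timedelta, timezone
    import boto3

    suspect_key = "AKIAIOSFODNN7EXAMPLE"  # placeholder, not a real key ID
    start = datetime.now(timezone.utc) - timedelta(days=90)

    ct = boto3.client("cloudtrail")
    for page in ct.get_paginator("lookup_events").paginate(
            LookupAttributes=[{"AttributeKey": "AccessKeyId", "AttributeValue": suspect_key}],
            StartTime=start):
        for event in page["Events"]:
            print(event["EventTime"], event["EventName"], event.get("Username"))

    # IAM Identity Center is regional; call this in the region where it could be enabled.
    sso = boto3.client("sso-admin")
    for inst in sso.list_instances()["Instances"]:
        print("Identity Center instance:", inst["InstanceArn"])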