Here are some troubleshooting tips for getting AWS SNS messages into S3 via Kinesis Data Firehose.
- First, create your Firehose delivery stream with Direct PUT as the source. Select your S3 bucket as the destination and configure prefixes and other options so you can identify your records within the bucket. A sketch of this step is shown below.
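
  A minimal boto3 sketch of this setup, assuming the stream name, bucket, and role ARN below are placeholders you replace with your own:

  ```python
  import boto3

  firehose = boto3.client("firehose")

  # Placeholder names/ARNs; substitute your own stream, role, and bucket.
  firehose.create_delivery_stream(
      DeliveryStreamName="sns-to-s3",
      DeliveryStreamType="DirectPut",  # Direct PUT, not KinesisStreamAsSource
      ExtendedS3DestinationConfiguration={
          "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
          "BucketARN": "arn:aws:s3:::my-sns-archive-bucket",
          "Prefix": "sns/",                 # makes the records easy to spot
          "ErrorOutputPrefix": "sns-errors/",
          "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
      },
  )
  ```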
- Test that your Firehose can deliver messages to S3. There is a button right in the AWS console to do this. With the default prefixes, you'll see keys of the form YYYY/MM/DD/HH/ in your bucket, with each new file bundling several messages by timestamp. Note: objects may take up to five minutes to show up if you didn't change the default buffer settings!
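
  If you'd rather test from a script than the console button, a single record can be pushed with put_record; the stream name and payload here are just illustrative:

  ```python
  import json

  import boto3

  firehose = boto3.client("firehose")

  # Firehose does not add record delimiters, so append a newline yourself if
  # you want line-delimited JSON in the resulting S3 objects.
  payload = json.dumps({"test": "hello from put_record"}) + "\n"
  firehose.put_record(
      DeliveryStreamName="sns-to-s3",
      Record={"Data": payload.encode("utf-8")},
  )
  ```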
- If your messages aren't arriving, check that the bucket allows writes from Firehose. You might also take some time to configure the lifecycle of objects in the bucket: if they're logs or other data with a defined useful lifetime, set a lifecycle rule to expire them, move them to Glacier, etc. (see the sketch below).
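
  One way to handle that cleanup is an S3 lifecycle rule; the bucket name, prefix, and day counts below are only examples:

  ```python
  import boto3

  s3 = boto3.client("s3")

  # Example rule: move delivered records to Glacier after 30 days and
  # delete them after a year. Adjust the prefix and day counts to taste.
  s3.put_bucket_lifecycle_configuration(
      Bucket="my-sns-archive-bucket",
      LifecycleConfiguration={
          "Rules": [
              {
                  "ID": "expire-sns-records",
                  "Status": "Enabled",
                  "Filter": {"Prefix": "sns/"},
                  "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                  "Expiration": {"Days": 365},
              }
          ]
      },
  )
  ```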
- Once you've got messages flowing, create your SNS subscription to the Firehose. You'll need an IAM role that trusts SNS and grants write access to the Firehose. The built-in policy for this is fine for testing, but pare access down with a custom policy before going live. Once again, messages may take up to five minutes to land in S3 if you didn't lower the buffer settings.
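
  A sketch of the subscription call, assuming the topic ARN and subscription role name are placeholders; the role must trust sns.amazonaws.com and be allowed to call firehose:PutRecord / firehose:PutRecordBatch on the delivery stream:

  ```python
  import boto3

  sns = boto3.client("sns")
  firehose = boto3.client("firehose")

  # Look up the delivery stream ARN to use as the subscription endpoint.
  stream_arn = firehose.describe_delivery_stream(DeliveryStreamName="sns-to-s3")[
      "DeliveryStreamDescription"
  ]["DeliveryStreamARN"]

  sns.subscribe(
      TopicArn="arn:aws:sns:us-east-1:123456789012:my-topic",
      Protocol="firehose",
      Endpoint=stream_arn,
      Attributes={
          "SubscriptionRoleArn": "arn:aws:iam::123456789012:role/sns-to-firehose"
      },
      ReturnSubscriptionArn=True,
  )
  ```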
- If records still aren't arriving, check SNS's delivery failure logs. You can enable these by editing the topic and configuring a role that can write to CloudWatch Logs. Then look for the sns/region/account/topic-name/Failure log group in CloudWatch and check the delivery status JSON for errors.
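
  The same delivery-status logging can be enabled from a script; the topic ARN, feedback role, region, and account ID below are placeholders:

  ```python
  import boto3

  sns = boto3.client("sns")
  logs = boto3.client("logs")

  topic_arn = "arn:aws:sns:us-east-1:123456789012:my-topic"
  feedback_role = "arn:aws:iam::123456789012:role/sns-cloudwatch-feedback"

  # Enable failure feedback for the Firehose protocol; failed deliveries then
  # land in the log group sns/<region>/<account-id>/<topic-name>/Failure.
  sns.set_topic_attributes(
      TopicArn=topic_arn,
      AttributeName="FirehoseFailureFeedbackRoleArn",
      AttributeValue=feedback_role,
  )

  # Pull recent failure events and print the delivery status JSON.
  response = logs.filter_log_events(
      logGroupName="sns/us-east-1/123456789012/my-topic/Failure"
  )
  for event in response["events"]:
      print(event["message"])
  ```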
- As an example, I selected a Kinesis Data Stream instead of Direct PUT when setting up the Firehose. My SNS messages failed to send, but I could still push records to S3 using the test button in the Kinesis console. The SNS failures showed up in the CloudWatch logs as "400 This operation is not permitted on KinesisStreamAsSource delivery stream type". This might not be your exact error, but the failure logs will illuminate other errors too.