
Tuesday, July 16, 2024

Troubleshooting SNS to S3 Via Kinesis Firehose

Here are some troubleshooting tips for making sure your AWS SNS messages land in S3 via Kinesis Firehose.

  • First, create your Firehose with Direct PUT as the source. Select your S3 bucket as the destination and configure prefixes and other options so you can identify your records within the bucket.
  • Test that your Firehose can deliver messages to S3. There is a button right in the AWS console to do this (a scripted version of the same check appears after this list). With the default prefixes, you'll see keys in your bucket like /YYYY/MM/DD/HH, with new files bundling several messages by timestamp. Note: records may take up to five minutes to show up if you didn't change the buffer settings!
  • If your messages aren't arriving, check that the bucket can receive messages from Firehose. You might also take some time to configure the lifecycle of objects in the bucket. If they're logs or other data with a defined useful lifetime, you might want to set a retention period to clear them out, move them to Glacier, etc.
  • Once you've got messages flowing, you should be able to create your SNS subscription to the Firehose. You'll need an IAM Role that trusts SNS and grants Firehose access. The built-in policy for this is fine for testing, but pare access down with a custom policy before going live. Once again, if you didn't adjust the buffer settings, messages may take up to five minutes to arrive.
  • If logs still aren't arriving, check your SNS delivery failure logs. You can do this by editing the Topic and configuring a role with CloudWatch access (for writing to CloudWatch Logs). Then look for the sns/region/account/Failure key in CloudWatch and check the logged JSON for errors.
  • As an example, I selected a Kinesis Data Stream instead of Direct PUT when setting up the Firehose. My SNS messages failed to send, but I could push things to S3 using the test button in the Kinesis console. The SNS failures showed up in the CloudWatch logs as “400 This operation is not permitted on KinesisStreamAsSource delivery stream type”. This might not be your exact error, but the failure logs will illuminate other problems too.
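
If you'd rather script that delivery check than click the console's test button, here is a minimal sketch using the AWS SDK for JavaScript v3. The region, stream name, and payload are placeholders, not values from my setup.

```javascript
// Minimal sketch: push one test record into a Direct PUT Firehose stream,
// then watch the bucket for a new /YYYY/MM/DD/HH key after the buffer interval.
// The region and stream name below are placeholders -- substitute your own.
const { FirehoseClient, PutRecordCommand } = require("@aws-sdk/client-firehose");

const client = new FirehoseClient({ region: "us-east-1" });

async function sendTestRecord() {
  const payload = { source: "manual-test", sentAt: new Date().toISOString() };
  const result = await client.send(new PutRecordCommand({
    DeliveryStreamName: "sns-to-s3-test", // hypothetical stream name
    Record: { Data: Buffer.from(JSON.stringify(payload) + "\n") },
  }));
  console.log("Record accepted:", result.RecordId);
}

sendTestRecord().catch(console.error);
```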
I'm just getting started with Firehose, but these troubleshooting steps helped me configure an SNS to S3 pipeline without building my own Lambda or running something else like Logstash.

Tuesday, June 12, 2018

Quotes from Dan Kaminsky's Keynote at DEF CON China


Above is Dan Kaminsky's keynote at the inaugural DEF CON China.  It was nominally about Spectre and Meltdown, and I thought it was immediately applicable to testing at all levels.  Here are some moments that jumped out at me:

On Context

"There's a problem where we talk about hacking in terms of only software...What does hacking look like when it has nothing to do with software." 1:55

"But let's keep digging." Throughout, but especially 5:40

"Actual physics encourages 60 frames per second. I did not expect to find anything close to this when I started digging into the number 60...This might be correct, this might not be. And that is a part of hacking too." 6:10

"Stay intellectually honest as go through these deep dives. Understand really you are operating from ignorance. That's actually your strong point. You don't know why the thing is doing what it is doing...Have some humility as you explore, but also explore." 7:40

"We really really do not like having microprocessor flaws...and so we make sure where the right bits come in, the right bits come out. Time has not been part of the equation...Security [re: Specter/Meltdown] has been made to depend on an undefined element. Context matters." 15:00

"Are two computers doing the same thing?...There is not a right answer to that. There is no one context. A huge amount of what we do in hacking...is we play contexts of one another." 17:50

[Re: Spectre and Meltdown] "These attackers changed time which in this context is not defined to exist...Fast and slow...means nothing to the chip but it means everything to the users, to the administrators, to the security models..." 21:00

"Look for things people think don't matter. Look for the flawed assumptions...between how people think the system works and how it actually does." 35:00

"People think bug finding is purely a technical task. It is not because you are playing with people's assumptions...Understand the source and you'll find the destination." 37:05

"Our hardest problems in Security require alignment between how we build systems, and how we verify them. And our best solutions in technology require understanding the past, how we got here." 59:50

On Faulty Assumptions

"[Example of clocks running slow because power was not 60Hz] You could get cheap, and just use whatever is coming out of the wall, and assume it will never change. Just because you can doesn't mean you should...We'll just get it from the upstream." 4:15

"[Re: Spectre and Meltdown] We turned a stability boundary into a security boundary and hoped it would work. Spoiler alert: it did not work." 18:40

"We hope the design of our interesting architectures mean when we switch from one context to another, nothing is left over...[but] if you want two security domains, get two computers. You can do that. Computers are small now. [Extensive geeking out about tiny computers]" 23:10

"[RIM] made a really compelling argument that the iPhone was totally impossible, and their argument was incredibly compelling until the moment that Steve Jobs dropped an iPhone on the table..." 25:50

"If you don't care if your work affects the [other people working on the system], you're going to crash." 37:30

"What happens when you define your constraints incorrectly?... Vulnerabilities. ...At best, you get the wrong answer. Most commonly, you get undefined behavior which in the presence of hacking becomes redefinable behavior." 41:35

"It's important to realize that we are loosening the assumption that the developer knows what the system is supposed to do...Everyone who touches the computer is a little bit ignorant." 45:20

On Heuristics

"When you say the same thing, but you say it in a different time, sometimes you're not saying the same thing." 9:10

"Hackers are actually pretty well-behaved. When hackers crash code...it does really controlled things...changing smaller things from the computer's perspective that are bigger things from a human's perspective." 20:25

"Bugs aren't random because their sources aren't random." 35:25

"Hackers aren't modeling code...hackers are modeling the developers and thinking, 'What did [they] screw up?' [I would ask a team to] tell me how you think your system works...I would listen to what they didn't talk about. That was always where my first bugs came from." 35:45

On Bug Advocacy

"In twenty years...I have never seen stupid moralization fix anything...We're engineers. Sometimes things are going to fail." 10:30

"We have patched everything in case there's a security boundary. That doesn't actually mean there's a security boundary." 28:10

"Build your boundaries to what the actual security model is...Security that doesn't care about the rest of IT, is security that grows increasingly irrelevant." 33:20

"We're not, as hackers, able to break things. We're able to redefine them so they can't be broken in the first place." 59:25

On Automation

"The theorem provers didn't fail when they showed no leakage of information between contexts because the right bits went to the right places They just weren't being asked to prove these particular elements." 18:25

"All of our tools are incomplete. All of our tools are blind" 46:20

"Having kind of a fakey root environment seems weird, but it's kind of what we're doing with VMs, it's what we're doing with containers." 53:20

On Testing in the SDLC

"We do have cultural elements that block the integration of forward and reverse [engineering], and the primary thing we seem to do wrong is that we have aggressively separated development and testing, and it's biting us." 38:20

"[Re Penetration Testing]: Testing is the important part of that phrase. We are a specific branch of testers that gets on cooler stages...Testing shouldn't be split off, but it kinda has been." 38:50

Ctd. "Testing shouldn't be split off, but it kinda has to have been because people, when they write code, tend to see that code for what it's supposed to be. And as a tester, you're trying to see it for what it really is. These are two different things." 39:05

"[D]evelopers, who already have a problem psychologically of only seeing what their code is supposed do, are also isolated from all the software that would tell them [otherwise]. Anything that's too testy goes to the test people." 39:30

"[Re: PyAnnotate by @Dropbox] 'This is the thing you don't do. Only the developer is allowed to touch the code.' That is an unnecessary constraint." 43:25

"If I'm using an open source platform, why can't I see the source every time something crashes? ...show me the source code that's crashing...It's lovely." 47:20

"We should not be separating Development and Testing... Computers are capable of magic, and we're just trying to make them our magic..." 59:35

Misc

"Branch Prediction: because we didn't have the words Machine Learning yet. Prediction and learning, of course they're linked. Kind of obvious in retrospect." 27:55

"Usually when you give people who are just learning computing root access, the first thing they do is totally destroy their computer." 53:40 #DontHaveKids

"You can have a talent bar for users (N.B.: sliding scale of computer capability) or you can make it really easy to fix stuff." 55:10 #HelpDesk
"[Re: Ransomware] Why is it possible to have all our data deleted all at once? Who is this a feature for?!... We have too many people able to break stuff." 58:25

Sunday, June 10, 2018

Postman Masterclass Pt. 2

During my second Postman meetup as part of the Las Vegas Test Automation group, we were able to cover some of the more advanced features of Postman. It's a valuable tool for testing RESTful services (stronger opinions on that exist), and they are piling on features so fast that it's hard to keep track. If you're a business trying to add automation, Postman is easily the lowest barrier to entry. And with a few tweaks (or another year of updates) it could probably cover most of your API testing needs.

The meetup covered the Documentation, Mock Server, and Monitor functionality. These are pieces that can fit into your dev organization to smooth adoption, remove roadblocks, and add automation with very little overhead. In particular, the Mock Servers can break the dependency on third-party integrations quite handily, which keeps Agile sprints moving in the face of outside roadblocks. The Monitors seem like a half-measure: they give you a GUI for setting up external monitors of your APIs, but you still need Jenkins and their Newman node package to do the same thing inside your own dev environment. The big caveat with each of these is that they are most powerful when bought in conjunction with the Postman Enterprise license. Still, at $20 a head, it's far and away the least expensive offering on the market.
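
For reference, running a collection yourself with the Newman node package is only a few lines. This is a sketch rather than a drop-in config: the file names are placeholders for your own exported collection and environment.

```javascript
// Sketch of running a collection from a CI job with the newman package,
// mirroring what a Postman Monitor does in the cloud. File names are
// placeholders for your exported collection and environment.
const newman = require("newman");

newman.run({
  collection: require("./my-api.postman_collection.json"),
  environment: require("./staging.postman_environment.json"),
  reporters: ["cli", "junit"], // junit output for Jenkins to pick up
}, (err, summary) => {
  if (err) { throw err; }
  const failures = summary.run.failures.length;
  console.log(`Collection finished with ${failures} failure(s).`);
  process.exitCode = failures > 0 ? 1 : 0;
});
```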

Since the meetup, I've found a few workarounds for the features I wish it had that aren't immediately accessible from the GUI. As we know from testing in general, there is no one-size-fits-all solution. The new features are nice, but they don't offer some of the basics I rely on to make my job easier. Here is my ever-expanding list of add-ons and hidden things you might not know about. Feel free to comment or message me with more:

Postman has data generation in requests through Dynamic Variables, but they're severely limited in functionality. Luckily, someone dockerized npm faker into a RESTful service. This is super easy to slot into your Postman collections to create rich, real-enough test data. Just stand it up, query it, save the results to global variables, and reuse them in your tests.
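
Here's a rough sketch of that pattern as a pre-request script. The service URL, query string, and response fields are assumptions about however you stood up your faker service; adjust them to match.

```javascript
// Pre-request script sketch: pull fake data from a locally hosted faker
// service and stash it in globals for use as {{testName}} / {{testEmail}}.
// The URL and response fields below are assumptions -- adjust to your service.
pm.sendRequest("http://localhost:3000/api?name&email", (err, res) => {
  if (err) {
    console.error("Faker service unreachable:", err);
    return;
  }
  const fake = res.json();
  pm.globals.set("testName", fake.name);
  pm.globals.set("testEmail", fake.email);
});
```

Any request in the collection can then reference {{testName}} or {{testEmail}} in its URL, headers, or body.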

The integrated JavaScript libraries in the Postman Sandbox are worth a fresh look. The bulk of my work uses lodash, the crypto libraries, and tools for validating and parsing JSON. These turn your simple requests into data-validation and schema-tracking wonders.

  • Have a Swagger definition you don't trust? Throw it at the tv4 schema validator.
  • Have a deep tree of objects you need to navigate RESTfully? Slice and dice with lodash, pick objects at random, and throw it all into a Monitor. Running it every ten minutes should get you down into the nooks and crannies. (A sketch combining both ideas follows this list.)
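
A sketch of both ideas in one test script, assuming the response is a JSON array and that the expected schema lives in an environment variable I'm calling expectedSchema (both names and the nextId variable are made up for illustration):

```javascript
// Test script sketch: validate the response against a schema with tv4,
// then use lodash to pick a random item for a follow-up request.
// Older Postman builds expose these libraries as the globals tv4 and _.
const tv4 = require("tv4");
const _ = require("lodash");

const body = pm.response.json();
const schema = JSON.parse(pm.environment.get("expectedSchema"));

pm.test("Response matches the published schema", () => {
  pm.expect(tv4.validate(body, schema), JSON.stringify(tv4.error)).to.be.true;
});

// Grab one object at random so the next request in the collection
// exercises a different record on every run.
const picked = _.sample(body);
pm.environment.set("nextId", picked.id);
```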
This article on using the big list of naughty strings (https://ambertests.com/2018/05/29/testing-with-naughty-strings-in-postman/amp/) is another fantastic way to fold interesting data into otherwise static tests. The key is to make sure you investigate failures: to get the most value, you need good logs, and you need to pay attention to the results in your Monitors.

If you have even moderate coding skills among your testers, they can work magic on a Postman budget. If you were used to adding your own libraries in the Chrome App, beware: the move to a packaged app means you no longer have the flexibility to add that needed library on your own (faker, please?).

More to come as I hear of them.

Monday, March 5, 2018

Postman Master Class Pt. 1

I gave a talk on using Postman while testing. We covered the UI, creating a collection, working with environment variables, and chaining tests with JavaScript.

A big surprise from this talk was how few testers knew about Postman to begin with. When I first started testing websites, I wanted a more reliable way of submitting http requests. The ability to save requests got me out of notepad and command-line cURL. The move to microservices only made it more useful to me.

By far, the biggest discovery was how many testers had never explored its signature features. Environments and scripting make the instrumentation of integration testing almost effortless. Organizations that want automation but don't want to invest the time can turn simple tests into bare-bones system tests for very little extra expense.
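
As a small illustration of that chaining, assume a login request that returns a token (the endpoint and field names here are made up). The first request's Tests script saves the value, and every later request in the collection can reference it as {{authToken}}:

```javascript
// Tests tab of a hypothetical POST /login request: capture the auth token
// so later requests can send it as {{authToken}}.
const body = pm.response.json();

pm.test("Login returned a token", () => {
  pm.expect(body.token).to.be.a("string");
});

pm.environment.set("authToken", body.token);

// Subsequent requests then use an Authorization header such as:
//   Authorization: Bearer {{authToken}}
```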

I'm planning a Part 2 where I can talk about Newman, the command line collection runner. I also want to demonstrate the mocking and documentation features. If a company adopts their ecosystem, it has the potential to make a tester's life much easier.  Even if it's only a tester's tool, it can help them communicate better with developers and reach into the product with greater ease.

Slides