I have a site that uses iframe-resizer. After some code clean-up, every iframe in the app broke in seven different ways. Practically, this was worst on pages that had infinite scroll or similar events triggered as the page moved. The resizer was triggering a scroll event, which was triggering loading, which was triggering more scrolling! To make matters worse, the scroll event handler was either non-existent or came from jQuery, and it was absolutely no help.
In the end, we had configured the attributes on the iframe tag incorrectly. The clean-up had caused them all to be null when compiled into the app, so they never got rendered properly. This didn't show up as null in the final HTML, and there were no helpful errors to guide us. It took a long time to root out.
If you modify a system that uses iframe-resizer and everything goes to hell, make sure any changes to the iframe tag attributes or configuration are actually getting compiled down properly. It can save a world of headache.
I encountered an error when trying to run a C# command-line utility with `dotnet run`. The AWS package kept throwing an exception, and nothing I tried fixed it. Here's the error:
Unhandled exception. System.TypeInitializationException: The type initializer for 'Amazon.Runtime.Internal.FallbackInternalConfigurationFactory' threw an exception.
---> System.IO.InvalidDataException: Line 14:<arn:aws:iam::{{AWS Acct ID}}:role/{{Role Name}}
> in file C:\Users\{{User Name}}\.aws\credentials does not contain a section, property or comment.
After digging into the environment vars on my Windows box, trying to set things in PowerShell, and unsetting whatever I could, a co-worker helped me take a second look at the error. My credentials file itself had a typo on line 14. I had chopped off the 'role_arn=' from in front of my developer creds at some point in the past, and this util was the first thing to try to load it. Once I fixed up the creds, it ran like a champ.
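For reference, a healthy profile in the credentials file looks something like the sketch below (the profile name is a placeholder, not my real setup); the whole problem was that the role_arn key had been chopped off, leaving a bare ARN sitting on its own line.

# hypothetical profile name in ~/.aws/credentials
[my-dev-profile]
role_arn = arn:aws:iam::{{AWS Acct ID}}:role/{{Role Name}}
source_profile = default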
Preserving this here because googling that exact error didn't help me.
Three years ago in July, I completed Dan Boneh's Cryptography I course on Coursera with distinction. Since then, I've had the opportunity to use and test cryptographic systems at work and for hobbies. Here are a few lessons learned from testing encryption.
I have found my fair share of bugs in the crypto we chose to use at work. I've gotten into a routine when testing encryption used for message authentication:
Test the same plaintext multiple times. Does it need to be different each time? How much of the MAC is different each time? It can help to explore the output of your hashing function; it tells you a lot about how the function actually behaves.
Replay it. How can a user abuse identical MAC'd data if they replay it at a later date? For a different user? Can you add items to the plaintext that will allow you to validate not only the data but the source or timeframe as well?
Ensure your hashes are detecting changes. Is your MAC rejected if you change the data at various places within the message?
Rotate the key. Do you need a hash to survive a key change? Usually you can just regenerate the data and re-MAC it, so figure out if you really need to use MACs over long lifetimes. They're easy to compute.
Generate a bunch at once. Is performance an issue with the service? Most hashes are built for speed, but is yours?
For each of these failure modes, I'm looking mostly for hints of weakness. I'm expecting pseudo-random noise, but how does my brain distinguish that from almost random noise?
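As a concrete illustration of the first two checks, here is a minimal sketch in Node.js using the built-in crypto module (the key and messages are made up for the example): the same plaintext should give the same MAC every run under a fixed key, and a one-character change should give output that shares nothing recognizable with the original.

const crypto = require('crypto');

const key = crypto.randomBytes(32);
const mac = (msg) => crypto.createHmac('sha256', key).update(msg).digest('hex');

const original = 'user=alice&amount=100';  // hypothetical message
const tampered = 'user=alice&amount=900';  // one digit changed

console.log(mac(original)); // identical both times: deterministic for a fixed key
console.log(mac(original));
console.log(mac(tampered)); // should look nothing like the lines above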
There are many times when you need to generate a unique but random value but don't have the space to use a GUID. To evaluate whether a solution will be "unique enough", check out the Birthday problem Wikipedia page, and this table of probabilities in particular. Figure out how many possible values exist (9 numeric digits = 10^9 ~= 2^30). Find that value on the table as the hash space size and compare it against the number of times you'll be generating the value. This will tell you if the algorithm you want to use is sufficient. If you are making long-term IDs that can only be created once, you obviously want the probability of collision to be extremely low. If you can recover from a collision by creating a new transaction fairly readily, you might not need as much assurance. I've used this to help drive a decision to increase a unique token size from 13 to 40 characters, guide switching from SQL auto-numbers to random digits to hide transaction volumes, and ensure internal transaction IDs are unique enough to support troubleshooting and reporting.
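If you'd rather compute than read off the table, the standard birthday-bound approximation is easy to run anywhere. Here's a sketch in plain JavaScript (the example numbers are illustrative, not from a real system):

// Probability of at least one collision after n random draws from a space of d values.
// Standard birthday-bound approximation: 1 - e^(-n(n-1) / 2d).
function collisionProbability(n, d) {
  return 1 - Math.exp(-n * (n - 1) / (2 * d));
}

const space = 1e9; // 9 numeric digits ~= 2^30 possible values

console.log(collisionProbability(1e4, space)); // ~0.05 -- 10,000 IDs is already risky
console.log(collisionProbability(1e5, space)); // ~0.99 -- 100,000 IDs all but guarantees a collision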
Time and again, the past three years have taught me that cryptography must be easy to use for it to be used widely. I've stayed with Signal for text messaging because it just works. I can invite friends and not be embarrassed by its user interface. It doesn't tick all the boxes (anonymity is an issue since it's a centralized service), but it has enough features to be useful and few shortcomings. This is the key to widespread adoption of encryption for securing communications. Since Snowden revealed the extent of the NSA's data collection capability, sites everywhere have switched on HTTPS through Let's Encrypt. Learning more about the implementations of SSH and TLS in the course was both informative and daunting. I was anxious to get HTTPS enabled without rehosting the site on my own. In early 2018, Blogger added the ability to do just that through Let's Encrypt. It requires zero configuration once I toggle it on. I can't sing its praises enough. The content of this blog isn't exactly revolutionary, but this little move toward a private and authentic web helps us all.
Dan Boneh's Cryptography course continues to inform my testing. The core lesson still applies: "Never roll your own cryptography." And the second is how fragile these constructs are. Randomness is only random enough given the time constraints. Secure is only secure enough for this defined application. Every proof in the course is only as good as our understanding of the math, and every implementation is vulnerable at the hardware, software, and user layers. In spite of this, it continues to work because we test it and prove it hasn't broken yet. I'm looking forward to another three years of picking it apart.
Above is Dan Kaminsky's keynote at the inaugural DEF CON China. It was nominally about Spectre and Meltdown, but I found it immediately applicable to testing at all levels. Here are some moments that jumped out at me:
On Context:
"There's a problem where we talk about hacking in terms of only software...What does hacking look like when it has nothing to do with software." 1:55
"But let's keep digging." Throughout, but especially 5:40
"Actual physics encourages 60 frames per second. I did not expect to find anything close to this when I started digging into the number 60...This might be correct, this might not be. And that is a part of hacking too." 6:10
"Stay intellectually honest as go through these deep dives. Understand really you are operating from ignorance. That's actually your strong point. You don't know why the thing is doing what it is doing...Have some humility as you explore, but also explore." 7:40
"We really really do not like having microprocessor flaws...and so we make sure where the right bits come in, the right bits come out. Time has not been part of the equation...Security [re: Specter/Meltdown] has been made to depend on an undefined element. Context matters." 15:00
"Are two computers doing the same thing?...There is not a right answer to that. There is no one context. A huge amount of what we do in hacking...is we play contexts of one another." 17:50
[Re: Spectre and Meltdown] "These attackers changed time which in this context is not defined to exist...Fast and slow...means nothing to the chip but it means everything to the users, to the administrators, to the security models..." 21:00
"Look for things people think don't matter. Look for the flawed assumptions...between how people think the system works and how it actually does." 35:00
"People think bug finding is purely a technical task. It is not because you are playing with people's assumptions...Understand the source and you'll find the destination." 37:05
"Our hardest problems in Security require alignment between how we build systems, and how we verify them. And our best solutions in technology require understanding the past, how we got here." 59:50
On Faulty Assumptions:
"[Example of clocks running slow because power was not 60Hz] You could get cheap, and just use whatever is coming out of the wall, and assume it will never change. Just because you can doesn't mean you should...We'll just get it from the upstream." 4:15
"[Re: Spectre and Meltdown] We turned a stability boundary into a security boundary and hoped it would work. Spoiler alert: it did not work." 18:40
"We hope the design of our interesting architectures mean when we switch from one context to another, nothing is left over...[but] if you want two security domains, get two computers. You can do that. Computers are small now. [Extensive geeking out about tiny computers]" 23:10
"[RIM] made a really compelling argument that the iPhone was totally impossible, and their argument was incredibly compelling until the moment that Steve Jobs dropped an iPhone on the table..." 25:50
"If you don't care if your work affects the [other people working on the system], you're going to crash." 37:30
"What happens when you define your constraints incorrectly?... Vulnerabilities. ...At best, you get the wrong answer. Most commonly, you get undefined behavior which in the presence of hacking becomes redefinable behavior." 41:35
"It's important to realize that we are loosening the assumption that the developer knows what the system is supposed to do...Everyone who touches the computer is a little bit ignorant." 45:20
On Heuristics:
"When you say the same thing, but you say it in a different time, sometimes you're not saying the same thing." 9:10
"Hackers are actually pretty well-behaved. When hackers crash code...it does really controlled things...changing smaller things from the computer's perspective that are bigger things from a human's perspective." 20:25
"Bugs aren't random because their sources aren't random." 35:25
"Hackers aren't modeling code...hackers are modeling the developers and thinking, 'What did [they] screw up?' [I would ask a team to] tell me how you think your system works...I would listen to what they didn't talk about. That was always where my first bugs came from." 35:45
On Bug Advocacy:
"In twenty years...I have never seen stupid moralization fix anything...We're engineers. Sometimes things are going to fail." 10:30
"We have patched everything in case there's a security boundary. That doesn't actually mean there's a security boundary." 28:10
"Build your boundaries to what the actual security model is...Security that doesn't care about the rest of IT, is security that grows increasingly irrelevant." 33:20
"We're not, as hackers, able to break things. We're able to redefine them so they can't be broken in the first place." 59:25
On Automation:
"The theorem provers didn't fail when they showed no leakage of information between contexts because the right bits went to the right places They just weren't being asked to prove these particular elements." 18:25
"All of our tools are incomplete. All of our tools are blind" 46:20
"Having kind of a fakey root environment seems weird, but it's kind of what we're doing with VMs, it's what we're doing with containers." 53:20
On Testing in the SDLC:
"We do have cultural elements that block the integration of forward and reverse [engineering], and the primary thing we seem to do wrong is that we have aggressively separated development and testing, and it's biting us." 38:20
"[Re Penetration Testing]: Testing is the important part of that phrase. We are a specific branch of testers that gets on cooler stages...Testing shouldn't be split off, but it kinda has been." 38:50
Ctd. "Testing shouldn't be split off, but it kinda has to have been because people, when they write code, tend to see that code for what it's supposed to be. And as a tester, you're trying to see it for what it really is. These are two different things." 39:05
"[D]evelopers, who already have a problem psychologically of only seeing what their code is supposed do, are also isolated from all the software that would tell them [otherwise]. Anything that's too testy goes to the test people." 39:30
"[Re: PyAnnotate by @Dropbox] 'This is the thing you don't do. Only the developer is allowed to touch the code.' That is an unnecessary constraint." 43:25
"If I'm using an open source platform, why can't I see the source every time something crashes? ...show me the source code that's crashing...It's lovely." 47:20
"We should not be separating Development and Testing... Computers are capable of magic, and we're just trying to make them our magic..." 59:35
Misc:
"Branch Prediction: because we didn't have the words Machine Learning yet. Prediction and learning, of course they're linked. Kind of obvious in retrospect." 27:55
"Usually when you give people who are just learning computing root access, the first thing they do is totally destroy their computer." 53:40 #DontHaveKids
"You can have a talent bar for users (N.B.: sliding scale of computer capability) or you can make it really easy to fix stuff." 55:10 #HelpDesk
"[Re: Ransomware] Why is it possible to have all our data deleted all at once? Who is this a feature for?!... We have too many people able to break stuff." 58:25
During my second Postman meetup as part of the Las Vegas Test Automation group, we were able to cover some of the more advanced features of Postman. It's a valuable tool for testing RESTful services (stronger opinions on that also exist), and they are piling on features so fast that it is hard to keep track. If you're a business trying to add automation, Postman is easily the lowest barrier to entry. And with a few tweaks (or another year of updates), it could probably handle most of your API testing.
The meetup covered the Documentation, Mock Server, and Monitor functionality. These are pieces that can fit into your dev organization to smooth adoption, remove roadblocks, and add automation with very little overhead. In particular, the Mock Servers they offer can break the dependency on third-party integrations quite handily. This keeps Agile sprints moving in the face of outside roadblocks. The Monitors seem like a half-measure: they give you a GUI for setting up external monitors of your APIs, but you still need Jenkins and their Newman node package to do the same thing inside your dev environment. The big caveat with each of these is that they are most powerful when bought in conjunction with the Postman Enterprise license. Still, at $20 a head, it's far and away the least expensive offering on the market.
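If you do go the Jenkins route, the glue is small. Here's a rough sketch using Newman's Node API; the file names are whatever your exported collection and environment happen to be called, not anything Postman dictates.

// Run an exported collection the way a Monitor would, but inside your own environment.
const newman = require('newman');

newman.run({
    collection: require('./my-collection.json'),      // exported from Postman
    environment: require('./dev-environment.json'),   // exported environment, if you use one
    reporters: 'cli'
}, function (err, summary) {
    if (err) { throw err; }
    console.log('Run finished with ' + summary.run.failures.length + ' failures');
});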
Since the meetup, I've found a few workarounds for the features I wish it had that aren't immediately accessible from the GUI. As we know from testing in general, there is no one-size-fits-all solution. The new features are nice, but they don't offer some of the basics I rely on to make my job easier. Here is my ever-expanding list of add-ons and hidden things you might not know about. Feel free to comment or message me with more:
Postman has data generation in requests through Dynamic Variables, but they're severely limited in functionality. Luckily, someone dockerized npm faker into a RESTful service. This is super easy to slipstream into your Postman Collections to create rich and real-enough test data. Just stand it up, query it, save the results to global variables, and reuse them in your tests, as in the sketch below.
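A minimal sketch of that as a Postman pre-request script. The URL and endpoint are assumptions about however you've stood up the faker service locally; only pm.sendRequest and pm.globals.set come from Postman itself.

// Hypothetical local faker service; swap in your own host and endpoint.
pm.sendRequest('http://localhost:3000/name/firstName', function (err, res) {
    if (err) {
        console.error(err);
        return;
    }
    // Stash the generated value so any later request or test can reuse it.
    pm.globals.set('fakeFirstName', res.text());
});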
The integrated JavaScript libraries in the Postman Sandbox are worth a fresh look. The bulk of my work uses lodash, the crypto libraries, and tools for validating and parsing JSON. These turn your simple requests into data validation and schema tracking wonders.
Have a Swagger definition you don't trust? Throw it in the tv4 schema validator.
Have a deep tree of objects you need to be able to navigate RESTfully? Slice and dice with lodash, pick objects at random, and throw it all up into a Monitor. Running it every ten minutes should get you down into the nooks and crannies.
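Here's a rough sketch of the previous two items as a Postman test script. The schema and the response shape are invented for the example; tv4, lodash (_), and the pm API are the pieces the Sandbox actually ships.

// Validate the response against a (hypothetical) schema pulled from your Swagger definition.
const schema = {
    type: 'object',
    required: ['items'],
    properties: {
        items: { type: 'array', items: { type: 'object', required: ['id'] } }
    }
};

const body = pm.response.json();

pm.test('Response matches schema', function () {
    pm.expect(tv4.validate(body, schema)).to.be.true;
});

// Pick a random object out of the tree with lodash and save it for the next
// request in the collection (or the next Monitor run) to chase.
const randomItem = _.sample(body.items);
pm.globals.set('nextItemId', randomItem.id);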
This article on bringing in the big list of naughty strings (https://ambertests.com/2018/05/29/testing-with-naughty-strings-in-postman/amp/) is another fantastic way to fold interesting data into otherwise static tests. The key is to ensure you investigate failures. To get the most value, you need good logs, and you need to pay attention to your results in your Monitors.
If you have even moderate coding skills among your testers, they can work magic on a Postman budget. If you were used to adding your own libraries in the Chrome App, beware: the move to a packaged app means you no longer have the flexibility to add that needed library on your own (faker, please?).
In Behat, I added a singleton to our contexts to store things across scenarios, but I ran into trouble when trying to keep separation between my tests. The storage object allowed me to be creative with builders, validators, and similar ways of reducing repetition and making the PHP code behind the scenarios easier to read. There was a problem though: it would randomly be cleared in the middle of a test.
The only thing I knew was that the object would get cleared at relatively the same time. I had a set of about 50 different tests in a single feature. Each would call an API multiple times, run validations on the responses, and then move on to the next test. All the while, it would put information into the storage object. The tests would not just fail in the middle of a scenario; they would generally fail near the same part of a scenario every time. It smelled like timing: an async process, or something clearing a logjam.
While designing the storage object, I had the bright idea to clear it with every scenario. The singleton acts like a global variable, and a clear after each one would ensure data from one test didn't pop up in another. To make sure I was running this at the last possible moment, I put the clear into the __destruct() method of my context class. By putting the clear in the destructor, I gave PHP permission to handle it as it saw fit. In reality, it sometimes left my scenario objects to linger while running the next one (due to a memory leak or similar in Behat itself, or a problem in my code; I couldn't tell).
/**
* Destructor
*/
public function __destruct()
{
ApiContextStore::clear();
}
I first stopped clearing the store and the bugs went away. Whew! But how could I make sure I wasn't contaminating my tests with other data and sloppy design? I tried two things:
1) gc_collect_cycles() forces the garbage collector to run. This seemed to have the same effect, stopping the crashes, but it was kind of a cryptic thing to do. I had to put it in the constructor of the Context rather than somewhere that made more sense.
/**
* FeatureContext constructor.
*/
public function __construct()
{
/**
* Bootstrap The Store
*/
gc_collect_cycles();
ApiContextStore::create(); // Creates an instance if needed
}
2) Putting the clear in an @AfterScenario hook provided the same protection, but it ran, purposefully, after every scenario was complete. I'm not freeing memory with my clear, so relying on garbage collection wasn't a priority. I just needed it to run last.
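For completeness, a minimal sketch of that hook (the method name is mine; @AfterScenario is Behat's standard hook annotation):

/**
 * Clear the shared store once Behat is completely done with the scenario.
 *
 * @AfterScenario
 */
public function clearStore()
{
    ApiContextStore::clear();
}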
I use Behat to create some ham-handed load scenarios fairly regularly. PhpStorm can help me spin up several runs at once to get concurrency if I want a crushing load. But PhpStorm on Mac seems to disconnect from these running tests if I lock my computer or leave them running too long. Killing them is a pain: I have to run `ps`, find each PID, then kill it.
When the list gets too long, I like to use awk to comb through the list and kill anything that is found. It's easy to search and parse out the tokens like so:
ps -ef | awk '/behat/ {system("kill -9 " $2)}'
ps -ef delivers a verbose list of processes. This is piped to awk, where I can specify a command using awk's scripting language. In this command, I first search for 'behat'. Then I run a command that pulls the second token, the process ID, from each matching line and inserts it into the `kill` command.
I finished reading @tarah's book Women in Tech. What better way to celebrate its paperback release than with a quick review.
Five years ago, I found my life turned inside out. People asked me deeply personal questions and questioned my basic competence. In the center of the maelstrom, I found comfort in a book with stories of people like me who were successful in spite of the difficulty. The stories were also paired with advice on how others had survived, thrived, and moved past the traumatic events.
I modified the Pololu RGB LED Strip drivers from version 1.2.0 to support Radio Shack's behind-the-times model, which has 30 LEDs controlled in 3-diode sections. I had to swap the colors around to match this pinout, and I changed the struct to a class (because why not).
The fix was to physically reorder the declaration of the red/green/blue variables in the struct declaration. This way, when the information is written to the strip, it is sent in a different (and now correct) order. You can make the fix yourself by changing the file PololuLedStrip.h:
I should probably talk to Pololu about licensing concerns here. I found the license from the original driver and copied it into my repo. I couldn't figure out how to fork it properly, so I just re-uploaded it until I understand git a bit better.
This post is part of a series about building electro-mechanical PIN-cracking robots, R2B2 and C3BO.
This is a proof of concept for @JustinEngler's C3BO (https://github.com/justinengler/C3BO) using transistor-controlled relays. It was prototyped by modifying Blink from the Arduino sample projects.
In the video, you'll notice I've replaced your finger on the touchpad with a wire to the headphone jack's ground, which acts as the circuit ground. The two pieces of copper tape were no longer sticky enough to stay put by themselves, so I am holding them down. They press 2 and 5 at about 8 key presses per second.
The work-in-progress shots from the Misc Electronics post are for this repository. I need to restore some changes lost after a kernel panic on my Raspberry Pi dev station, and then it's a hop, skip, and a jump to release.
Initial test of this 20x4 character screen. Notice the haiku:
Small Screen Blues
Screens 20 by 4
Focus encoded messag
As haiku does
First fully custom project. Writing a 'guessing' game that uses a RadioShack RGB LED Strip, the screen, LEDs, and 6 buttons. Already maxed out the memory of the little chip on the Uno R3.
Moving my dev environment to Raspberry Pi. The borrowed laptop I was using is going to be repurposed and will live in an inaccessible place. Here is the Pi running the Arduino IDE.
Used Google, and knowledge from a class at SYN Shop (the local hackerspace), to remove and troubleshoot this module. It is the blower motor speed controller from my car's AC. I found out the transistor in it is bad, but replacing it would take more effort than it's worth.