
One of the Most Secure Applications in the B.I.B.I.F.I. Contest

Build it, Break it, Fix it (BIBIFI): that's the name of the Maryland Cybersecurity Center's contest, which serves as the Capstone Project of the Cybersecurity Specialization on Coursera for those who successfully passed the Usable Security, Software Security, Cryptography, and Hardware Security modules.

In this post, I'd like to share the lessons learned during this contest and how we ended up with one of the most secure applications at the end of the Break it phase. This is particularly interesting because, this time, I was the developer: despite having a security background, I had to build an application and defend it against security professionals. After this contest, some conflicts between development and information security became crystal clear to me. I will start by explaining the contest itself and then move on to the details of each phase.

Contest

In this contest, participants go through 3 rounds:

  • Build it: to build applications according to the contest's specifications.
  • Break it: to break other teams' applications to score points. Breaks can be claimed for correctness bugs or security bugs (exploits).
  • Fix it: to fix bugs found during the Break it phase.

Before any round began, participants had to form teams of 1 to 5 people. So, everybody published their skills and sought candidates who could help, either by coding in the Build it phase or by breaking code in the Break it phase. In the end, more than 100 teams from all over the world were formed.

This is where I met @zeafonso, @tche, @apolishc, and @lucas. We called ourselves "CyberMarmitex." "Marmita" is the Brazilian Portuguese word for a packed lunch box. So, CyberMarmitex doesn't make any sense. Don't worry about that, just keep reading hehe.

Build It

One thing I can say for sure is that everybody embraced the Rugged Software Manifesto right off the bat. We all knew that we were going to be attacked 2 weeks after the beginning of the Build it phase. But could we really be compromised when we all knew that attacks were on their way? I didn't think so. After completing a security specialization and understanding the importance of information security, I believed that few vulnerabilities would be found. But boy, was I wrong.

During this phase, we needed to develop 2 command-line programs: a client and a server. Any language could be used. I won't go into too much detail to avoid spoiling the excitement in case you want to try it yourself. Build points were given for correctness (i.e., passing unit tests) and for performance.

It took a lot of time. I coded 90% of both programs in Ruby, and we implemented the basic validations to pass the correctness tests, which included some security verifications (e.g., invalid inputs), but not all of them. @apolishc made some code improvements, implemented replay protection on the ATM side, and helped us pass some tests as well.

Code wasn't everything. We had a threat model made by @lucas. @zeafonso and @tche were primarily responsible for building a fuzzer to get us ahead of other teams in the Break it phase. However, things changed too much in the middle of the week, and we ended up not using the fuzzer.

Near the deadline to submit our code, I was going somewhat crazy. I had implemented encryption from the client to the server but not from the server to the client. @zeafonso pointed this out, along with the replay attack protection and the correct AES mode to prevent padding oracle attacks.
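
To make this concrete, here is a minimal sketch of what encrypting both directions with an authenticated mode could look like in Ruby. This is illustrative, not our contest code: the key is stubbed, and AES-256-GCM is one reasonable mode choice that sidesteps padding oracle attacks by authenticating the ciphertext instead of relying on padding checks.

    require 'openssl'
    require 'securerandom'

    # AES-256-GCM gives confidentiality plus integrity in one mode, so a
    # tampered ciphertext fails authentication instead of leaking padding
    # errors. KEY stands in for a pre-shared key (stubbed here).
    KEY = SecureRandom.random_bytes(32)

    def encrypt(plaintext)
      cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
      cipher.key = KEY
      iv = cipher.random_iv                    # unique per message
      ciphertext = cipher.update(plaintext) + cipher.final
      [iv, cipher.auth_tag, ciphertext]        # send all three on the wire
    end

    def decrypt(iv, tag, ciphertext)
      cipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
      cipher.key = KEY
      cipher.iv = iv
      cipher.auth_tag = tag                    # tampering raises CipherError
      cipher.update(ciphertext) + cipher.final
    end

Both the client and the server would call the same pair of functions, which is exactly what makes "encrypting only one direction" the kind of slip that's easy to make under deadline pressure and easy to catch in review.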

Skipping the building details, the takeaway from this phase for me was that development and information security need to be handled by 2 different people, even if the developer has an information security background. The reason for this is that, in the end, when the deadline is near, the developer focuses on making the application work more than anything else. And it's not wrong for the developer to think that way. It's their ultimate responsibility. Furthermore, security for a broken app doesn't have any value at all.

Consequently, stupid things happen, like forgetting to encrypt from the server to the client. It's funny. So, even in your company, in your team, if you have someone who is a good software engineer and has a security background, make sure to find one more security person to support them. Otherwise, sloppy security controls may take the place of effective security controls.

Break It

The war had begun. A lot of teams started scoring points from correctness bugs, like trying variations of the parameters allowed for the client. Some were very dumb, e.g., using an invalid port like 9999999, and some were clever, e.g., the parameter "-a <account>" had to accept the characters [a-zA-Z\-], so a valid value could be "-b". Trying "-a -b" usually caused programs to crash, when they should have accepted "-b" as the value of "-a" and worked normally; few did.
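
For illustration, here is a sketch (with hypothetical names, not the contest's exact grammar) of the kind of anchored whitelist check that makes "-a -b" work: the value "-b" matches the allowed character set, so it must be accepted no matter how flag-like it looks.

    # The spec allows [a-zA-Z\-] for the account, so "-b" is a valid
    # *value* even though it looks like a flag. \A and \z anchor the
    # whole string; a naive parser that treats anything starting with
    # "-" as an option rejects or crashes here.
    ACCOUNT_RE = /\A[a-zA-Z\-]+\z/

    def valid_account?(value)
      !value.nil? && value.match?(ACCOUNT_RE)
    end

    valid_account?('-b')    # => true  (legal per the spec)
    valid_account?('a;rm')  # => false (outside the whitelist)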

Our team suffered from correctness bugs too, and a few days later security bugs (exploits) started to show up in many teams. Security bugs could be categorized as confidentiality bugs and integrity bugs. If information exchanged between the client and server could be read during a man-in-the-middle (MITM) attack, that's a confidentiality exploit. If information could be manipulated or tampered with during a MITM attack, that's an integrity exploit.

The majority of teams fell to replay attacks, both from client to server and from server to client. It was hard for teams to remember to implement this protection manually; choosing the cryptographic algorithm was already hard enough.

To prevent replay attacks, we used a cryptographically secure random number generator to produce a unique string (a nonce) to be sent with each request to the server. The server would store this string to prevent double processing and also return it to the client, so the client knew which request a response belonged to. This solution stopped replay attacks in both directions.
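
Here is a simplified sketch of that scheme in Ruby; the names are illustrative, and the crypto and transport layers are omitted. Note that in a real protocol the nonce must travel inside the encrypted, authenticated message; otherwise a MITM can simply rewrite it.

    require 'securerandom'
    require 'set'

    # Client side: attach a fresh random nonce to every request.
    def build_request(payload)
      { nonce: SecureRandom.hex(16), payload: payload }
    end

    # Server side: reject any nonce seen before, and echo the nonce back
    # so the client can match the response to its own request.
    SEEN_NONCES = Set.new

    def handle(request)
      raise 'replay detected' if SEEN_NONCES.include?(request[:nonce])
      SEEN_NONCES << request[:nonce]
      { nonce: request[:nonce], result: process(request[:payload]) }
    end

    def process(payload)
      :ok  # placeholder for the real application logic
    end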

Even after all the protections we put in place, one team, "b01lers," found a vulnerability in our application, the only security bug we had. We couldn't see the details before the Break it round ended, so we started to guess; our best hypothesis was a brute-force attack on some parameter passed from the client to the server or from the server to the client. We were right, but we could have been more specific.

B01lers took advantage of the lack of padding in our ciphertexts. Error messages were shorter and successful messages were longer, and based on that they brute-forced some parameters. That was clever. To fix it, we had to add padding so that all ciphertexts had the same length. However, there is a trick to it: if we pad to some fixed length, e.g., 50 bytes, but the client can also send a parameter of up to 50 bytes, an attacker who forces a huge parameter value pushes the message past the padding, and the length leak comes back. So there is some math to do before setting the padding length. This is one of those practical details that we, as security consultants, tend to overlook most of the time. "Hey, put the damn padding," you'd say. And in the end, the padding isn't implemented well.
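
The sketch below illustrates that math: the fixed padded length must be derived from the maximum message the spec allows, not from a typical one. The limits are made-up numbers, not the contest's, and a length prefix is used so the receiver can strip the padding unambiguously.

    # Pad every plaintext to one fixed length before encryption so all
    # ciphertexts look the same size on the wire. Assumes binary/ASCII
    # payloads for simplicity.
    MAX_PARAM_LEN = 250   # illustrative: longest parameter the spec allows
    OVERHEAD      = 50    # illustrative: fixed protocol fields
    PADDED_LEN    = MAX_PARAM_LEN + OVERHEAD

    def pad(message)
      raise 'message too long' if message.bytesize > PADDED_LEN
      # 2-byte length prefix, then the message, then zero-byte padding.
      [message.bytesize].pack('n') + message +
        "\0" * (PADDED_LEN - message.bytesize)
    end

    def unpad(padded)
      len = padded[0, 2].unpack1('n')
      padded[2, len]
    end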

There were 3 other teams with no reported vulnerabilities at all, as they added padding to prevent side-channel attacks based on ciphertext length. However, they were vulnerable to timing side-channel attacks, as I saw in their code, although no one wrote exploits for those.
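
A timing-safe comparison is a small fix. Below is a sketch of the usual XOR-accumulator idiom in Ruby; newer versions of the openssl gem also ship OpenSSL.fixed_length_secure_compare for the same purpose.

    # Comparing MACs (or any secret) with == returns early at the first
    # differing byte, leaking timing. XOR-accumulating over the full
    # length takes the same time no matter where the strings differ.
    def constant_time_equal?(a, b)
      return false unless a.bytesize == b.bytesize
      diff = 0
      a.bytes.zip(b.bytes) { |x, y| diff |= x ^ y }
      diff.zero?
    end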

The maximum number of exploits found against a single application was 26. So, the answer to the question I asked during the Build it phase is a HUGE YES. Even in 2015, with all this security awareness, teams were compromised very badly. Imagine developers WITHOUT a security specialization. The landscape is fuc**** dirty. Holy cow...

Fix It

There is not much to say here. We had to fix bugs and resolve disputes over reported bugs that weren't actually bugs. We fixed 60% of them, including the b01lers exploit and the correctness bugs (basically fixing regexes), with the support of @tche and @lucas's spreadsheet to organize everything. We disputed the other 40%.
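
A classic example of the kind of regex fix involved (illustrative, not necessarily one of ours): in Ruby, ^ and $ match line boundaries rather than the ends of the string, so multi-line input slips past them; \A and \z anchor the whole string.

    loose  = /^[a-zA-Z\-]+$/     # line anchors: unsafe for validation
    strict = /\A[a-zA-Z\-]+\z/   # string anchors: whole input must match

    input = "valid\nnot-checked-at-all!!!"
    input.match?(loose)   # => true  (only the first line is validated)
    input.match?(strict)  # => false (the whole string must match)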

@tche also helped a lot during the Break it phase to earn points for our team. He basically carried the entire team on his back during that phase.

Anyway, we had good teamwork, despite having to juggle our day jobs with this capstone project. I confess that when we reached 5 members, I thought the point of having a team of 5 was to hope that at least 3 of them would pull their weight. But, in the end, everybody worked, and the team proved itself strong enough to earn the "most secure application" award.

Actually, this award of being one of the most secure applications is self-given; there is nothing official from BIBIFI about it. In fact, the contest counted performance, correctness bugs, and security bugs to select the best builders. Since correctness bugs hurt us a bit, teams with more exploits but fewer correctness bugs could rank higher than us, even though exploits carry more impact. We also wrote in Ruby, which is far from the most performant choice.

Personally, I don't mind much, even knowing that good software is Reliable (correct), Resilient (withstands attacks), and Recoverable (responds rapidly to exploits). The challenge for me was the security attacks: withstanding attacks from professionals around the globe was the most exciting part.

I'd like to congratulate CyberMarmitex, thank @apolishc and @lucas for reviewing the English of this post, and thank the BIBIFI contest and its sponsors. I hope we can change this landscape of secure development, which is very, very ugly. It's very hard to find that number of security bugs at the end of a security specialization and not be disappointed.

So, if you develop applications, go learn security, spread the knowledge, and make the internet a safer place, please.

Thank you.

* Note: In case you're wondering whether reading this post would make you perform better in the next BIBIFI, I'd like to point out that it helps, of course, but only as much as any other client/server security article. There are no answers here, no specs, no source code. We (all teams) knew that we needed a secure application, and all those controls were explained in the courses we had to pass. So one way to see this article is as a bunch of notes I've taken. Still, knowing the controls is only 50% of what needs to be done: there is a huge gap between pointing out a security control and implementing it. I discuss such controls in this blog separately, not related to BIBIFI, as you can see on the homepage.
