Monday, September 10. 2012
I read this Penny Arcade Comic the other day.
And it made me think. We do live in the science fiction world of yesterday. We have communicators, submarines, space ships, powerful computers. It's absolutely amazing.
So the question I wonder about is "what's next?" Where is the new science fiction that pushes things to the next level? I'm not talking about things like warp drive, androids, or society falling apart (possibly due to warp drive or androids). I mean what's next? Back when folks like Jules Verne and H.G. Wells created their stories, those stories were absolutely crazy. Stuff no ordinary person would ever think of. Stuff ordinary people thought was so insane, nobody would ever do those things.
We've done a lot of those things.
So what now? What are the totally insane science fiction ideas no ordinary person would ever think of? I have no idea. Are there some books I need to read?
Friday, September 7. 2012
I ran across this article the other day: Why Linux Will Never Suffer From Viruses Like Windows.
The article makes some pretty bold claims about Linux never getting a virus. I admit, I'm quite skeptical of the conclusions the author makes. I do security work on Linux and I keep telling everyone "our day is coming". I won't complain if I'm wrong, but I suspect I'm not.
Here are my thoughts on the issue.
The reason Linux doesn't have viruses is because Linux doesn't have viruses
This basically means nobody is really writing them. Why that is, of course, is up for debate, but even if they didn't propagate well, we'd at least see something out there. So far there's not much.
Your phone is bigger than the Linux Desktop
I expect every year for the next 50 or so to be "The Year of the Linux Desktop". What a lot of people don't get is that the desktop is becoming less relevant than ever, while Linux is more important than ever. Watch out for viruses on your phone; that's the next place the bad guys are going to go. Except this time it's not going to be about telling all your friends you love them, it's going to be about stealing all your information and money, THEN telling all your friends how much you love them.
People are the problem
Fundamentally speaking, until we remove people from the equation (which is pretty hard to do and still turn a profit), we will have attacks. While some platforms do make attacking them easier than others, I'm fairly certain that in at least 80% of instances it was a person making a bad decision that caused their computer to become infected. Technology moves faster than people can learn; what is safe today won't be safe tomorrow. We can't even imagine what the next attack will look like.
Will Linux have viruses? Maybe. Will Linux become a bigger target? Certainly. Can we do anything about it? Even if not, we're going to try. Buckle up, I suspect the next few years are going to be a wild ride.
Monday, February 27. 2012
I've had an OpenPGP smartcard for quite some time now. I recently acquired a Crypto Stick as well, thanks to rcvalle. Using these things with GnuPG isn't very well documented and took a lot of work to figure out. Rather than let others struggle through all this, I'll document the procedure here.
The basic idea behind using a smartcard to hold your GPG keys is that an attacker can't steal the key. The worst they could do is to sign or decrypt something while you have the card inserted.
Please take the time to understand what's going on here. I have lots of examples to make this all work, but be sure you understand what's actually happening; public key cryptography is hard, and you should have at least a basic grasp of it. Treat this howto as an example, not a cookie cutter recipe to follow.
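As a quick sanity check before you dive in, make sure GnuPG can see the card at all. This is just the stock GnuPG card command, nothing specific to my setup:

gpg --card-status

If that prints the card's serial number and a set of (probably empty) key slots, you're in good shape. If it errors out, sort out your reader and scdaemon before going any further.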
This is a rather long post, more after the fold.
Continue reading "How to use a cryptostick with Gnu Privacy Guard"
Sunday, February 26. 2012
Long long ago, Eric Raymond coined "Linus' Law":
given enough eyeballs, all bugs are shallow.
Last week Coverity released a report showing that open source software has a lower defect rate than proprietary software. This of course has some folks claiming that Linus' Law works!
Now I'm about as big a fan of open source as they come, but I'm not sure this is the right cause and effect. I've done a lot of thinking about Linus' Law in the past few months as part of the Red Hat Product Security Team. What the Coverity report shows is that open source has fewer of the kind of defects Coverity can detect. That's really it.
On the topic of open source code quality and bugs though, I think there are a few more important things to consider.
1) The source code is available.
We've all written horrible horrible code when we know nobody will look at it. If I know someone will see my work, especially THE WHOLE WORLD, I'm going to spend a few extra minutes to make it look nice, which will help reduce bugs.
2) The original author is probably still around
One of the problems you see with proprietary software is that the developers don't own the code. If a developer gets a new job, they'll probably never see that code again. With open source, regardless of where you work, you keep working on your projects. This "old knowledge" is a very powerful thing.
3) Anyone can help
If you report a bug to a proprietary vendor, they have to justify the fix from a business perspective. If the bug is obscure, or doesn't affect functionality, they may decide not to fix it. With open source, anyone can submit a patch. That means you can benefit from the long tail of contributions. The core 5% of people may write 95% of the software, but it's the other 95% of users and their 5% patches that can make the real difference. Those 5% patches are likely bugfixes, not new functionality.
4) The Distribution model is powerful
I once worked for a company that wrote proprietary software. The use cases and testing were very well defined. If a user found a bug because they were doing something weird, they were told to go away. With open source, there are hundreds of distributions, and all of them do things a little bit differently. These corner cases improve overall code quality (a bug is a bug).
I suspect the overall message here isn't that Linus' Law works (it might, I'm honestly not sure). The message is that open source works. Why it works can't be pinned down to one thing; it's a lot of factors all coming together. Maybe all bugs ARE shallow with enough eyeballs. It's more likely though that the message should be "more eyeballs are better than fewer eyeballs".
What do you think?
Monday, January 9. 2012
So one of the really cool things about working at Red Hat is we get to actually talk about what we do. Openness is a really important ideal the company has. I'm currently working to create a security development program inside Red Hat, which is also going to have to be open source friendly. As this is interesting, hard, fun, hard, and useful, I'm going to do my best to keep the world filled in on how things are going. If someone else can benefit from my mistakes, I shall be very happy.
After all the various things I've read, like Microsoft's Security Development Lifecycle, BSIMM, and the CERT documentation, it's very clear that there is a lot of information about what a secure development program should look like. There is almost no information about actually starting one.
A big challenge with starting a secure development program is that you can't just show up and start telling people how to do their jobs. You'll probably last two days before someone beats you up in the parking garage. This is especially true when you take open source into consideration. If I tried to tell the folks at Apache how to develop, I'd likely be laughed out of the room (or mailing list in this instance). A program like this needs careful planning. You get one chance to make it work; if you blow it, nobody will ever listen to you again.
This is where a sound roadmap comes into play. While the overall shape of my roadmap continues to change as I talk to people, the first step has to be training. If we don't train the involved parties, nobody will understand what's going on or why certain practices should be adopted. This includes everything from basic security all the way to training on concepts such as secure design, development, testing, and response.
Phase 1: Training
Security is hard, if nobody knows what you're talking about, you will fail. It's also dangerous to assume everyone understands security basics. Security is one of those areas where the more you learn the less you know. It's also changing constantly. What was sound security advice a few years ago may not be anymore. Various security technologies continue to evolve, and new ones are developed. New exploitation techniques are constantly changing. It's very hard to keep tabs on everything.
Our initial training is going to focus on these topics:
Security basics. This includes information on the security development lifecycle as a whole, and understanding what a security flaw is, why it's a problem, and some history. If we don't know where we've been, it's hard to see where we're going.
Secure design. If you don't have a secure design before you start to write the code, it may become impossible to fix some problems without a significant amount of work. Secure design from the start will include topics like threat modeling, defense in depth, secure defaults, and the principle of least privilege.
Secure development. This topic is very broad and has the potential to get overly technical. A huge challenge here is how much information is enough? It's not going to be productive if you end up creating developers who are so worried about security issues that their productivity drops to zero.
Secure testing. An important part of security development is to test what you create. There is never going to be software that is totally free of security flaws, but by doing certain testing you can catch some things before they go out the door. This is where things like dynamic and static testing are helpful. Again though, there have to be sensible limits. At some point the law of diminishing returns kicks in. We need to figure out where that is.
There are of course other areas for training, but the above are some of my initial goals. I also hope to make some portion of this training material available to the open source community. I'm not entirely sure how all that will happen long term, but it's something on the list.
As my plans continue to grow and evolve, I'll be certain to fill in more details.
Until next time.
Tuesday, January 3. 2012
So the year 2012 is finally here, the world is supposed to end this year (someday one of these predictions will be right). With every new year there are always a bunch of new exciting predictions about computer security. Most are wrong. If we knew what was going to happen, we'd stop it from happening, hopefully.
Rather than make a pointless prediction for 2012, I have a few goals for the year. As I announced previously, I'm working on a security effort inside Red Hat to bring proactive security measures to our products. The Red Hat Product Security Team, if you will. Some of my current goals for the year (which are subject to change) are:
Security training materials
The team is working on various security training materials, with topics ranging from secure design to development and testing. Some of this will stay internal to Red Hat, but I hope to make much of it public for the general open source community to leverage. I've done a fair amount of research on this topic. There is a lot of quality material out there, but finding it can be difficult.
Security development principles for open source
There are numerous security development lifecycle programs out there, but none are geared for the unique challenges open source faces. I've yet to find a project that doesn't want to write secure code, or handle security flaws properly. I have found many that don't really know where to start though. While I've done a lot of investigating about existing security development programs, most lack the very crucial step of where and how to start. Stay tuned for updates in this area.
Investigate various security tools
There are a lot of interesting security tools out there: static analysis, dynamic analysis, fuzzing, and testing. Some work well, some don't, and some are hard to use. One of my goals is to sort this out and make the findings available to anyone who is interested.
Work with open source projects
One of my biggest gripes about security efforts is they often work like this: "here's some information, good luck". Rather than just dump things like training materials and principles on the world, we need to work with projects that have an interest in this. One of the really hard, and really interesting, aspects of open source is that every project is very different, and many rely on volunteers who aren't going to be receptive to a bunch of new process. I can't say I know how this one is going to end, but I have plenty of ideas where it can start.
In general I see 2012 as being very busy and exciting, which isn't a bad problem to have at all.
If you have any questions, or want to have a chat about any of this, feel free to mail me, firstname.lastname@example.org.
Monday, November 21. 2011
I'm rather excited to announce an expansion of Red Hat's product security efforts. I've been tasked with creating a team inside Red Hat to formalize our product security work. There is already a lot of really good work happening inside Red Hat in the security space. Technologies such as SELinux, ExecShield, secure development principles, and hardening in the toolchain have come a long way. However, as happens with all decent sized companies, the left hand doesn't always know what the right hand is doing. Rather than letting good work go unnoticed, we're going to start formalizing some of these efforts to leverage what's being done, expand existing efforts into other product areas, and develop new programs.
Some additional efforts I would like to further include secure design principles, developer security training initiatives, secure coding practices, and security testing.
If you're interested in being a part of this effort, I have a number of open positions scattered around the world, feel free to apply directly or contact me if you have any questions. I'm quite happy to discuss location, so don't let that scare you off.
Please note these positions are no longer open. If you want to view open positions, please visit
Software Engineer - Security Best Practices Development
Software Engineer - Tool Development
Software Engineer - New Security Technologies Development
Software Engineer - Code Audit Development
I don't expect any of this to be easy, but nothing worth doing is ever easy. I expect many challenges and rewards to come from this. Red Hat is in a unique and great position to take on such a task. Stay tuned for more updates.
Wednesday, April 27. 2011
News is starting to make the rounds that Sony's PSN users had their personal info stolen nine days ago.
Sony: Your PSN Personal Info Was Stolen Nine Days Ago
Most people are now thinking NINE @#$%ING DAYS!!!
The wording Sony is using is fairly vague; nothing sounds concrete. This is most likely because they don't know for sure. I suspect it took them nine days to release this news because they spent the first five days running around in an utter panic waving their hands in the air.
I'm not going to pick on Sony for being broken into, this happens. Even the best networks in the world have flaws. Nothing is perfect. Given how long it's taken them to respond, they probably didn't have a proper incident handling plan. It's easy to see security as a useless cost until you need it, then it looks pretty cheap.
Someday, you too will be compromised. What will you do when it happens?
Sunday, April 24. 2011
There has been a lot of noise lately about Apple and Google phones tracking people. This isn't very surprising honestly. Everything tracks what you do these days. Your web browser tracks the sites you visit. I would be amazed if more than half of your travel time isn't recorded on some sort of video security system (think about how many public and private video cameras you see, if you can see it, it can see you). Even when you spend money, it's being tracked. There are debates as to how anonymous cash is, for now, let's just presume it's not anonymous. Even the books you read are easier than ever to track thanks to ereaders (sure they know you bought The Catcher in the Rye, but now they know you read it once a month).
We live in a world where we have no privacy. This probably won't ever change since companies want to know this information. I'd be surprised if any single group has managed to put it all together yet, but there is a giant pile of gold waiting for whoever does (my current money is on Facebook, as long as someone doesn't swoop in and get it right before they're done floundering).
The real question is what can we do about it? There are really only three options. Go live in a shack in the woods and never ever spend money or use technology. Stop caring. Don't do silly things.
The vast majority of people live in the "Stop caring" option since they don't know any better. Living in the woods is probably out of the question for most of us, as something will eat us on day 3 if we haven't starved to death. The right answer is to not be silly.
Continue reading "If a phone tracks you in the forest, does it makes a noise?"
Sunday, April 3. 2011
I've finally gotten around to setting up a new GPG key for myself. It can be found on the keyservers, signed with my old key for those of you interested. The fingerprint is:
CFB1 136C 6DD0 5BB9 D798 A78E 1CD8 ACDD BBE0 9A0F
The really cool thing about this key is that I have it living on an OpenPGP smartcard. Such a card can be found at kernel concepts. This means that it's quite difficult for someone to steal this key from me. It would take a physical theft for someone to gain the key. The best a remote attacker can do is decrypt or sign things as me while I have the card plugged into my computer.
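If you want to fetch the key and check it yourself, something along these lines should work. The keyserver here is just an example (use whichever you prefer), and the long key ID is simply the last 16 hex digits of the fingerprint above:

gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys 0x1CD8ACDDBBE09A0F
gpg --fingerprint 0x1CD8ACDDBBE09A0F

Compare the fingerprint gpg prints against the one listed above before you trust anything.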
As a warning, I wasn't able to generate my keys using the Omnikey or Gemalto USB card readers I have. I bought SIM sized smart cards so I can easily carry both the card and reader with me at all times. It turned out that GPG could generate the keys on Windows, so I ended up having to do a clean Windows install to generate the keys (which was promptly destroyed afterwards). It was a rather silly waste of time, but it did work.
Saturday, February 26. 2011
There is a really cool utility from the SELinux folks called sandbox. It lets you run an application inside a sandbox which has limited permissions on the system. The idea is that you can run an untrusted process that shouldn't be able to cause any real damage. I dare say these days the most untrusted process is a web browser. I know Chrome uses a similar technology where each tab gets its own sandbox, but I don't run Chrome, so my goal is to make Firefox as safe as possible. Plus I'm a paranoid nut, so I find this sort of thing really interesting.
The sandbox program is part of the policycoreutils-python package in Fedora. It has the unique feature of being able to run an X application inside the sandbox, which it does by using a Xephyr X server. Getting this to run Firefox the way I wanted took a bit of work, but it's quite handy now that I have it working.
The biggest advantage I now have is multiple browsers running as my user. I have one browser for general browsing. I never enter a password into this browser, as I presume some of the sites I visit could be malicious in nature.
My other browser is for trusted sites, like webmail and my bank. I'm able to run as many browsers as I wish; since each runs in its own sandbox, I don't have to worry about any resource collisions. If I have a questionable site to investigate (which happens fairly often in the security world), I just run another browser, check the site, then close it. The sandbox cleans up any mess left behind when I'm done.
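To give an idea of the shape of the thing, the basic invocation looks something like this (sandbox_web_t is the stock policy type meant for sandboxed web browsing; my personal tweaks are after the fold):

sandbox -X -t sandbox_web_t firefox

Run the same command again in another terminal and you get a second, completely separate browser, each with its own Xephyr display and its own throwaway home and /tmp.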
More after the fold.
Continue reading "Firefox in a sandbox with Fedora"
Monday, December 13. 2010
Last week there was an Exim 0day flaw found in the wild. This hasn't happened to something this widely used in quite a long time. It's worth pointing out that all the right folks came together to get this fixed in an amazing amount of time. They did a great job and deserve a lot of credit. This could have been a lot worse than it was.
Upstream sent this message giving a pretty good run down of events.
Their openness is certainly the best way to have handled this. If you treat security like a PR problem, it becomes a PR problem.
The short story is that on December 7, a vigilant sysadmin (Sergey Kononenko) noticed a compromised server and luckily grabbed a dump of the data. The compromise wasn't widely noticed for about two days, during which the investigation began. Here is where open source showed its real power. When the folks investigating hit issues, they started asking other community folks for help, and word eventually made its way to various vendors. Everyone brought a different piece to the puzzle, and the next day the problem was understood and vendors started to patch their copies of Exim. It turned out upstream had already fixed the issue quite some time ago.
It's not uncommon for emergencies to go horribly wrong, but when the right people do the right things, things can work nicely.
Wednesday, August 11. 2010
Private browsing is not as secure as users think, says study
This shouldn't come as a surprise to anyone. Anytime you try to retrofit a new security model into an old one, you will break things, and sometimes it's just impossible to do it right. I suspect that most modern browsers will never be able to remove all possible traces of what you've been up to. There is a clever solution in Fedora 13 though. There is a tool called sandbox that Dan Walsh cooked up. I'll save the scary details, you can read Dan's blog for that.
The basic idea is that you can run a web browser inside of a sandbox; once you exit the sandbox, your files are all deleted. By using SELinux to confine the browser, you don't have to worry about an exploit breaking out of the sandbox. Since all the files are removed once you exit, there is no history left on the disk. Currently the sandbox only deletes the files written to disk; I filed a bug to shred them instead, which would prevent someone from inspecting the leftover bits on the disk.
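In the meantime the idea is roughly this: instead of a plain delete, walk whatever directory the sandbox wrote to and shred each file before removing it. A rough sketch, where the directory is purely a hypothetical stand-in for wherever the leftover files end up:

find /path/to/sandbox/leftovers -type f -exec shred -u {} +
rm -rf /path/to/sandbox/leftovers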
The only trick that's not obvious is that you probably want to carry along your .mozilla directory for things like bookmarks and plugins. My sandbox browser command is
sandbox -t sandbox_web_t -i /home/bress/.mozilla -X firefox
It's not perfect and I don't use it for everything (yet), but I hope in the near future, all my browsing will happen this way.
Tuesday, August 10. 2010
Does hype hurt the world of security? Maybe, but probably not.
Black Hat convention hype hurts the enterprise risk management process
The author has one good point about security: don't fall into the hype. The article also makes a number of silly points, my favorite being:
The security community must stop this hysterical response to vulnerability research. Security professionals must embrace more measured, logical and reasoned responses to new threats.
This isn't really true. The press needs to stop the hysterical response, vendors should fix their problems and have a reasonable story to tell their customers.
Most of these researchers are looking to make a name for themselves. The difference is that when they cause a stir, it sounds scary.
It gets even more scary when you have an unresponsive or silly vendor who just stirs the pot. There are still a lot of vendors who treat security like a PR problem rather than a technical issue. Security flaws are bugs caused by programming mistakes. They need to be fixed, not approached as if they are a news story. If you fix the problem without much fanfare, there isn't much of a story. How many headlines have you read that say "Vendor fixes flaw in timely and reasonable manner!"? Not many. It's way more fun to write about the vendor who refuses to fix a security flaw and insists the researcher is a bad bad person who lies and is bad.
Security flaws can be embarrassing for the affected party. Public disclosure, even sensational public disclosure is sometimes needed. These people often don't get paid directly for their work. Their pay is in reputation; they aren't going to complain if their flaw gets lots of hype.
Monday, August 9. 2010
Mozilla plans to automatically update Firefox 4, without asking the user anything:
Mozilla plans to silently update Firefox
There was once a time when I would have thought this was bad. Not telling a user what's going on can't be good, right?
I think this is true for some users, but they're a minority. Most people don't understand what the update is for, or why they should take it. That means some of them will click "no" when asked if they want to update. In the rare event someone has a real need to skip an update, they can still choose to go down that dangerous path.
The obvious counter argument is "what if my vendor does something evil with their update!" If this is something you're worried about, you need a new vendor. If you can't trust your vendor, what's worse, a system that can be infected by an evildoer, or a system that IS infected by a dishonest vendor?
Automatic updates for security flaws are good; automatic updates for random vendor whims are not. I suspect that much of the fear of automatic updates comes from vendors trying to sneak in other changes. I would say if you don't trust your vendor, and they don't trust you, what's even the point?