Friday, September 7. 2012
I ran across an article the other day: Why Linux Will Never Suffer From Viruses Like Windows.
The article makes some pretty bold claims about Linux and viruses. I admit, I'm quite skeptical of the conclusions the author draws. I do security work on Linux and I keep telling everyone "our day is coming". I won't complain if I'm wrong, but I suspect I'm not.
Here are my thoughts on the issue.
The reason Linux doesn't have viruses is because Linux doesn't have viruses
This basically means nobody is really writing them. Why not is of course up for debate, but even if they didn't propagate well, we'd at least see something out there. So far there's not much.
Your phone is bigger than the Linux Desktop
I expect every year for the next 50 or so to be "The Year of the Linux Desktop". What a lot of people don't get is that the desktop is becoming less relevant than ever before, while Linux is more important than ever before. Watch out for viruses on your phone; that's the next place the bad guys are going to go. Except this time it's not going to be about telling all your friends you love them, it's going to be about stealing all your information and money, THEN telling all your friends how much you love them.
People are the problem
Fundamentally speaking, until we remove people from the equation (which is pretty hard to do and still turn a profit), we will have attacks. While some platforms do make attacking them easier than others, I'm fairly certain in at least 80% of instances it was a person making a bad decision that caused their computer to become infected. Technology moves faster than people can learn, what was safe today won't be safe tomorrow. We can't even imagine what the next attack will look like.
Will Linux have viruses? Maybe. Will Linux become a bigger target? Certainly. Can we do anything about it? Even if not, we're going to try. Buckle up, I suspect the next few years are going to be a wild ride.
Monday, February 27. 2012
I've had an OpenPGP smartcard for quite some time now. I recently acquired a Crypto Stick as well, thanks to rcvalle. Using these things with GnuPG isn't well documented and took a lot of work to figure out. Rather than let others struggle through all this, I'll document the procedure here.
The basic idea behind using a smartcard to hold your GPG keys is that an attacker can't steal the key. The worst they could do is to sign or decrypt something while you have the card inserted.
Please take the time to understand what's going on here. I have lots of examples to make this all work, but don't just copy them blindly. Public key cryptography is hard, and you should have at least a basic grasp of what's happening. This howto should be treated as an example, not a cookie cutter recipe to follow.
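If you just want to confirm GnuPG can talk to the card before diving in, the standard card commands look roughly like this (depending on your setup you may need gpg2 rather than gpg):

gpg --card-status   # show what GnuPG sees on the inserted card and reader
gpg --card-edit     # interactive card menu for setting PINs and generating or moving keys

If --card-status can't find the card, get the reader working first; nothing else will work until it does.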
This is a rather long post, more after the fold.
Continue reading "How to use a cryptostick with Gnu Privacy Guard"
Sunday, February 26. 2012
Long long ago, Eric Raymond coined "Linus' Law".
given enough eyeballs, all bugs are shallow.
Last week Coverity released a report showing that open source software has a lower defect rate than proprietary software. This of course has some folks claiming that Linus' Law works!
Now I'm about as big a fan of open source as they come, but I'm not sure the cause and effect here is that simple. I've done a lot of thinking about Linus' Law in the past few months as part of the Red Hat Product Security Team. What the Coverity report shows is that open source has fewer of the kind of defects Coverity can detect. That's really it.
On the topic of open source code quality and bugs though, I think there are a few more important things to consider.
1) The source code is available.
We've all written horrible horrible code when we know nobody will look at it. If I know someone will see my work, especially THE WHOLE WORLD, I'm going to spend a few extra minutes to make it look nice, which will help reduce bugs.
2) The original author is probably still around
One of the problems you see with proprietary software is that the developers don't own the code. If a developer gets a new job, they'll probably never see that code again. With open source, regardless of where you work, you can keep working on your projects. This "old knowledge" is a very powerful thing.
3) Anyone can help
If you report a bug to a proprietary vendor, they have to justify the fix from a business perspective. If the bug is obscure, or doesn't affect functionality, they may decide not to fix it. With open source, anyone can submit a patch. That means you can benefit from the long tail of contributions. The core 5% of people may write 95% of the software, but it's the other 95% of users and their 5% patches that can make the real difference. Those 5% patches are likely bugfixes, not new functionality.
4) The Distribution model is powerful
I once worked for a company that wrote proprietary software. The use cases and testing were very well defined. If a user found a bug because they were doing something weird, they were told to go away. With open source, there are hundreds of distributions, all of them doing things a little bit differently. These corner cases improve overall code quality (a bug is a bug).
I suspect the overall message here isn't that Linus' Law works (it might, I'm honestly not sure). The message is that open source works. Why it works can't be pinned down to one thing; it's a lot of factors all coming together. Maybe all bugs ARE shallow with enough eyeballs. It's more likely though that the message should be "more eyeballs are better than fewer eyeballs".
What do you think?
Monday, November 21. 2011
I'm rather excited to announce an expansion of Red Hat's product security efforts. I've been tasked with creating a team inside Red Hat to formalize our product security work. There is already a lot of really good work happening inside Red Hat in the security space. Technologies such as SELinux, ExecShield, secure development principles, and hardening in the toolchain have come a long way. However, as happens with all decent-sized companies, the left hand doesn't always know what the right hand is doing. Rather than letting good work go unnoticed, we're going to start formalizing some of these efforts to leverage what's being done, expand existing efforts into other product areas, and develop new programs.
Some additional areas I would like to further are secure design principles, developer security training initiatives, secure coding practices, and security testing.
If you're interested in being a part of this effort, I have a number of open positions scattered around the world, feel free to apply directly or contact me if you have any questions. I'm quite happy to discuss location, so don't let that scare you off.
Please note these positions are no longer open. If you want to view open positions, please visit
Software Engineer - Security Best Practices Development
Software Engineer - Tool Development
Software Engineer - New Security Technologies Development
Software Engineer - Code Audit Development
I don't expect any of this to be easy, but nothing worth doing is ever easy. I expect many challenges and rewards to come from this. Red Hat is in a unique and great position to take on such a task. Stay tuned for more updates.
Wednesday, April 27. 2011
News is starting to make the rounds that Sony's PSN users had their personal info stolen nine days ago.
Sony: Your PSN Personal Info Was Stolen Nine Days Ago
Most people are now thinking NINE @#$%ING DAYS!!!
The wording Sony is using is fairly vague; nothing sounds concrete. This is most likely because they don't know for sure. I suspect it took them nine days to release this news because they spent the first five days running around in utter panic, waving their hands in the air.
I'm not going to pick on Sony for being broken into, this happens. Even the best networks in the world have flaws. Nothing is perfect. Given how long it's taken them to respond, they probably didn't have a proper incident handling plan. It's easy to see security as a useless cost until you need it, then it looks pretty cheap.
Someday, you too will be compromised. What will you do when it happens?
Sunday, April 24. 2011
There has been a lot of noise lately about Apple and Google phones tracking people. This isn't very surprising honestly. Everything tracks what you do these days. Your web browser tracks the sites you visit. I would be amazed if more than half of your travel time isn't recorded on some sort of video security system (think about how many public and private video cameras you see, if you can see it, it can see you). Even when you spend money, it's being tracked. There are debates as to how anonymous cash is, for now, let's just presume it's not anonymous. Even the books you read are easier than ever to track thanks to ereaders (sure they know you bought The Catcher in the Rye, but now they know you read it once a month).
We live in a world where we have no privacy. This probably won't ever change since companies want to know this information. I'd be surprised if any single group has managed to put it all together yet, but there is a giant pile of gold waiting for whoever does (my current money is on Facebook, as long as someone doesn't swoop in and get it right before they're done floundering).
The real question is what can we do about it? There are really only three options. Go live in a shack in the woods and never ever spend money or use technology. Stop caring. Don't do silly things.
The vast majority of people live in the "Stop caring" option since they don't know any better. Living in the woods is probably out of the question for most of us, as something will eat us on day 3 if we haven't starved to death. The right answer is to not be silly.
Continue reading "If a phone tracks you in the forest, does it makes a noise?"
Sunday, April 3. 2011
I've finally gotten around to setting up a new GPG key for myself. It can be found on the keyservers, signed with my old key for those of you interested. The fingerprint is
CFB1 136C 6DD0 5BB9 D798 A78E 1CD8 ACDD BBE0 9A0F
The really cool thing about this key is that I have it living on an OpenPGP smartcard. Such a card can be found from kernel concepts. This means it's quite difficult for someone to steal this key from me. It would take a physical theft for someone to gain the key. The best a remote attacker can do is decrypt or sign things as me while I have the card plugged into my computer.
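If you'd like to fetch the key and verify it against that fingerprint yourself, something along these lines should do it (the key ID here is just the last eight hex digits of the fingerprint):

gpg --recv-keys 0xBBE09A0F    # fetch the key from your configured keyserver
gpg --fingerprint 0xBBE09A0F  # compare this output against the fingerprint above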
As a warning, I wasn't able to generate my keys using the Omnikey or Gemalto USB card readers I have. I bought SIM sized smart cards so I can easily carry both the card and reader with me at all times. It turned out that GPG could generate the keys on Windows, so I ended up having to do a clean Windows install to generate the keys (which was promptly destroyed afterwards). It was a rather silly waste of time, but it did work.
Saturday, February 26. 2011
There is a really cool utility from the SELinux folks called sandbox. It lets you run an application inside a sandbox which has limited permissions on the system. The idea is that you can run an untrusted process which shouldn't be able to cause any real damage. I dare say these days the most untrusted process is a web browser. I know Chrome uses a similar technology where each tab gets its own sandbox, but I don't run Chrome, so my goal is to make Firefox as safe as possible. Plus I'm a paranoid nut, so I find this sort of thing really interesting.
The sandbox program is part of the policycoreutils-python package in Fedora. It has the unique feature of being able to run an X application inside the sandbox. This is done using a Xephyr X server. Getting this to run Firefox the way I wanted took a bit of work, but it's quite handy now that I have it working.
The biggest advantage I now have is multiple browsers running as my user. I have one browser for general browsing. I never enter a password into this browser, as I presume some of the sites I visit could be malicious in nature.
My other browser is for trusted sites, like webmail and my bank. I'm able to run any number of browsers I wish, since each runs in its own sandbox, I don't have to worry about any resource collisions. If I have a questionable site to investigate (which happens in the security world fairly often), I just run another browser, check the site then close it. The sandbox cleans up any mess left behind when I'm done.
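To give a rough idea, the invocations look something like the following. The -H and -T paths are just example locations made up for this post; pointing a sandbox at its own home and tmp directories is what lets the trusted browser keep its profile around between runs:

# throwaway browser for questionable sites; its files go away when it exits
sandbox -X -t sandbox_web_t firefox

# "trusted" browser with persistent home and tmp directories for bookmarks and the like
sandbox -X -t sandbox_web_t -H ~/sandboxes/trusted/home -T ~/sandboxes/trusted/tmp firefox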
More after the fold.
Continue reading "Firefox in a sandbox with Fedora"
Monday, December 13. 2010
Last week there was an Exim 0day flaw found in the wild. This hasn't happened to something this widely used in quite a long time. It's worth pointing out that all the right folks came together to get this fixed in an amazing amount of time. They did a great job and deserve a lot of credit. This could have been a lot worse than it was.
Upstream sent this message giving a pretty good run down of events.
Their openness is certainly the best way to have handled this. If you treat security like a PR problem, it becomes a PR problem.
The short story is that on December 7, a vigilant sysadmin (Sergey Kononenko) noticed a compromised server and luckily grabbed a dump of the data. The issue wasn't widely noticed for about two days, during which time the investigation began. Here is where open source showed its real power. When the folks investigating ran into issues, they started asking other community folks to help; this eventually made its way to various vendors. Everyone brought a different piece of the puzzle, and the next day the problem was understood and vendors started to patch their copies of Exim. It turned out upstream had fixed the issue quite some time ago.
It's not uncommon for emergencies to go horribly wrong, but when the right people do the right things, things can work nicely.
Wednesday, August 11. 2010
Private browsing is not as secure as users think, says study
This shouldn't come as a surprise to anyone. Anytime you try to retrofit a new security model into an old one, you will break things, and sometimes it's just impossible to do it right. I suspect that most modern browsers will never be able to remove all possible traces of what you've been up to. There is a clever solution in Fedora 13 though: a tool called sandbox that Dan Walsh cooked up. I'll skip the scary details; you can read Dan's blog for those.
The basic idea is that you can run a web browser inside of a sandbox, once you exit the sandbox, your files are all deleted. By using SELinux to confine the browser, you don't have to worry about an exploit breaking out of the sandbox. Since all the files are removed once you exit, there is no history left on the disk. Currently the sandbox only deletes the files written to disk, I filed a bug to shred them instead, which would prevent someone from inspecting the leftover bits on the disk.
The only trick that's not obvious is that you probably want to carry along your .mozilla directory for things like bookmarks and plugins. My sandbox browser command is
sandbox -t sandbox_web_t -i /home/bress/.mozilla -X firefox
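Since that's a bit much to type every time, it's easy to tuck behind a small shell alias (the name is just whatever you like):

# hypothetical convenience alias; drop it in ~/.bashrc
alias sandbox-firefox='sandbox -t sandbox_web_t -i /home/bress/.mozilla -X firefox'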
It's not perfect and I don't use it for everything (yet), but I hope in the near future, all my browsing will happen this way.
Tuesday, August 10. 2010
Does hype hurt the world of security? Maybe, but probably not.
Black Hat convention hype hurts the enterprise risk management process
The author makes one good point about security: don't fall for the hype. The article also makes a number of silly points, my favorite being:
The security community must stop this hysterical response to vulnerability research. Security professionals must embrace more measured, logical and reasoned responses to new threats.
This isn't really true. The press needs to stop the hysterical response, vendors should fix their problems and have a reasonable story to tell their customers.
Most of these researchers are looking to make a name for themselves. The difference is that when they cause a stir, it sounds scary.
It gets even scarier when you have an unresponsive or silly vendor who just stirs the pot. There are still a lot of vendors who treat security like a PR problem rather than a technical issue. Security flaws are bugs caused by programming mistakes. They need to be fixed, not approached as if they are a news story. If you fix the problem without much fanfare, there isn't much of a story. How many headlines have you read that say "Vendor fixes flaw in timely and reasonable manner!"? Not many; it's way more fun to write about the vendor who refuses to fix a security flaw and insists the researcher is a bad bad person who lies and is bad.
Security flaws can be embarrassing for the affected party. Public disclosure, even sensational public disclosure is sometimes needed. These people often don't get paid directly for their work. Their pay is in reputation; they aren't going to complain if their flaw gets lots of hype.
Monday, August 9. 2010
Mozilla plans to automatically update Firefox 4, without asking the user anything:
Mozilla plans to silently update Firefox
There was once a time when I would have thought this was bad. Not telling a user what's going on can't be good, right?
I think this is true for some users, but it's a minority of them. Most people don't understand what the update is for, or why they should get it. That means that some of them will click "no" when asked if they should update. In the rare event someone has a need to not take an update, they can choose to go down this dangerous path.
The obvious counter argument is "what if my vendor does something evil with their update!" If this is something you're worried about, you need a new vendor. If you can't trust your vendor, which is worse: a system that can be infected by an evildoer, or a system that IS infected by a dishonest vendor?
Automatic updates for security flaws are good, automatic updates for random vendor whims are not. I suspect that much of the fear of automatic updates comes from vendors trying to sneak in other changes. I would say if you don't trust your vendor, and they don't trust you, what's even the point?
Thursday, August 5. 2010
It seems some folks are indeed storing the body scan images everyone said would never be stored:
Feds admit storing checkpoint body scan images
This probably shouldn't surprise many people. The issue isn't so much that these guys are trying to be evil, but are protecting themselves. Let's say a bad guy gets through the machine. If there is no record of the scan, you have an instant scapegoat; the security screener clearly didn't do their job! If you've saved it though, you can point at it and say "Look, the scan was fine, not my fault!"
People generally don't like to be blamed for anything. When given the choice between what is right and what could keep them out of trouble, most people will opt to stay out of trouble. It's our monkey brains at work; we don't like to be in trouble.
Storing these images will probably never change, as it's really easy to convince most people we need these. When someone says "it makes us safer", it's hard to create an argument a normal person will understand. Normal people can relate to wanting to stop bad people from doing bad things (especially to themselves). What they can't relate to is how the people in charge can systematically remove freedoms over a very long period of time, slowly, so most don't notice. Sadly the closest thing to an argument most people grasp about these machines is that they don't want pictures of their naked bodies stored.
Thursday, May 6. 2010
Wow, it's been a long time since I've updated this thing. Hopefully I'll be less busy in the future.
Bruce Schneier had a really interesting story:
Nobody Encrypts their Phone Calls
I'm not very surprised by this. Encrypting phone calls is hard (even if you know what you're doing), which I suspect puts folks into two buckets:
1) People who don't understand their phone calls can and probably are being recorded
2) People smart enough to not use the phone
The really scary thing though is this quote:
In 2009, encryption was encountered during one state wiretap, but did not prevent officials from obtaining the plain text of the communications,
The question to think about now is: how did they get a transcript? (I suspect it was from a different remote listening device that wasn't a telephone.)
Wednesday, February 17. 2010
A friend passed along this blog posting from Microsoft:
Microsoft’s Many Eyeballs and the Security Development Lifecycle
The article is full of half truths and assumptions, but my favorite bit is probably this:
A million monkeys banging on a million keyboards will eventually produce Twelfth Night. Mathematically, the many-eyeballs argument, and the million-monkeys argument are equivalent.
I shall applaud Microsoft for comparing Open Source developers to monkeys randomly banging on keyboards. They've compared us to lots of things in the past, I'm not sure if this is a step up, or just lateral, either way, I'm happy to claim my infinite monkey status. It would be grand if one of you creative types could come up with a clever logo for our new social status.
In all seriousness though, the article makes a number of claims, I'll try to cover the big ones here:
1) Code review makes software more secure
2) Many Eyeballs is not true
3) Nobody is auditing Open Source software
4) The Microsoft Security Development Lifecycle, or SDL, is swell
1) Code review makes software more secure
I don't think anyone can argue against this. The author then goes on to pick out a handful of quotes about how Open Source doesn't get bugs fixed. I'm not even sure how one comes to this conclusion, so let's look at the facts presented instead: none. So in conclusion ... wait, what? What makes an article like this, with no actual meat behind it, hard to accept is that the evidence is all out there. Open Source development happens in public. You don't get to sweep bugs under the rug, and you can't pretend development is or isn't happening. If the author had any interest in backing up this claim, he could have picked a major project and proved his point.
Rather than make some crap up here about how secure Open Source is, I'll refer the reader to this link:
That's Red Hat's security metric data. If you don't believe it, you can figure it out for yourself by reviewing the open source development data. Debian has something similar here. If what the author claims were true, that Open Source doesn't get any code reviews or bugs fixed, it would be in a seriously sorry state. Keep in mind that there are a number of very widely used applications out there: Firefox, Apache, the Linux kernel, and JBoss, to name a few. If these applications were anywhere near the sorry state presented in the article, nobody would run them, but they're quite widely used.
I think reality wins this one.
2) Many Eyeballs is not true
3) Nobody is auditing Open Source software
These two ideas are similar in nature. Are they true? Who knows. I'm certainly not an expert, and I suspect without a rather expensive study we'll never know for sure. Here is what I do know: Microsoft has a finite number of employees. The number of Open Source developers eclipses this by orders of magnitude. That of course doesn't mean they're all doing audits, or even fixing bugs, but the numbers can't be ignored. Even if only one or two percent of Open Source developers are reviewing code, that's still a huge number. Unfortunately that sort of work isn't glamorous, so the folks doing it get little credit or attention.
My example is the Fedora Bug Zappers. These are folks who look at bug reports from Fedora. They're not well known, nor do they get lots of credit, but the job they do is amazing. Let's do a thought experiment here to show how bug reports are like echo chambers: they get noticed. If you report a bug in Fedora, one of the Bug Zappers will find it and take a look. If it's a security flaw, they'll pass it along to the security-minded folks. Right there, one person filing one bug has alerted about ten people (probably more, but let's be conservative) to the bug. We security types will take a look, and if it's real, pass it along to all the other interested Open Source distributions, and upstream. We're now easily over 100 people. All of those people will collaborate to some degree to develop a patch. Upstream will accept the patch, distributions will patch their packages, and the users will upgrade. Once all this is done, you're talking about many hundreds of people involved in this one bug report. This is where the idea of many eyeballs comes from. It's not about people just auditing code, it's about the community working together to improve the software.
The thing we need to be mindful of here is that Eric Raymond may be wrong about Linus' Law. It's not so much the why as it is the what. It's easy to claim that many eyeballs isn't true, but one cannot discount the fact that Open Source development works, and it works well. Why it works so well is no doubt an issue that can be debated at length. Perhaps the real power is in our infinite supply of monkeys banging on keyboards.
4) The Microsoft Security Development Lifecycle, or SDL, is swell
I agree with this 100%. It is swell, and it's a great idea. It would be wonderful if every Open Source project started doing this.
I would be interested in knowing how many flaws are stopped as a result of the SDL. I've not seen any metrics or papers on the effectiveness of this program. I'm certain it's stopped some number of flaws. If someone knows of such data, please pass it along. I won't comment further, as my understanding of this isn't all that deep, but from what I gather it is a good idea, and I applaud Microsoft for implementing it.
The original article I'm mostly disagreeing with here concludes with the usual old data that Microsoft releases fewer security advisories than Open Source does. This is of course a red herring meant to distract the reader. They've been caught multiple times only releasing one advisory for multiple flaws. With closed source, there isn't a good way to tell what's all getting fixed. In Open Source, we can't hide anything, it's all there. This keeps us honest. I could go on with this argument at length, but it's not really worth it. It's been picked apart every which way for years now, and I don't think anyone really cares anymore. At the end of the day, all that matters is keeping the end users safe. If some friendly competition helps do this, we all win.