Monday, 29 November 2010

Security Weekly News 29 November 2010 - Summary

Feedback and/or contributions to make this better are appreciated and welcome

Highlighted quotes of the week:

"Real security is built, not bought." - Richard Bejtlich

"Can't believe in 2010 many web devs still tell prospective client that security is additional cost, add-on or on request only." - Drazen Drazic

"If you try to limit access to complex services by running another complex service, you're only changing, not reducing, your exposure." - Moxie Marlinspike

"McAfee: 60% of the Top Google search terms return malicious sites in the top 100 results" - McAfee at DeepSec

"I haven't slept, showered, or seen sunlight for 48 hours. Just took the OSCE exam from Offensive-Security, best certification out there." - Dave Kennedy

"Heartbreaking decision to make. subject my children to the naked body scan or patdown. just want to be together on thanksgiving" - Jeremiah Grossman

To view the full security news for this week please click here (divided into categories, with a category index at the top). This week's categories (click to go directly to what you care about): Hacking Incidents / Cybercrime, Unpatched Vulnerabilities, Software Updates, Business Case For Security, Web Technologies, Network Security, Cloud Security, Mobile Security, Privacy, Cryptography / Encryption, Social Engineering, Tools, General, Funny

Highlighted news items of the week (No categories):

Not patched: Exploit code for still unpatched 0-day used by Stuxnet released, Elevation of privileges under Windows Vista/7 (UAC Bypass) - Proof of Concept, Privilege escalation 0-day in almost all Windows versions, Android vulnerability permits data theft

Patched: Adobe Reader X released with Windows sandbox, Apple closes 23 critical holes in Safari, NetBSD 5.1 feature update arrives, Wine 1.3.8 released

A few weeks ago at OWASP AppSec DC we made progress on an idea that several of us (@RafalLos, @secureideas, @securityninja, @TheCustOS) have been talking
about on Twitter for a while. The idea is based on trying to determine a good solution to what we see as the general brokenness of the Internet's web
applications. Not only do we see current applications as badly broken, but the velocity at which developers are building new insecure web applications is
increasing. The panel that we hosted at OWASP AppSec DC discussed one way we can contribute to reducing the rate at which new, insecure web
applications are being developed.
Our idea is based on improving the security of existing web application development frameworks: adding security components into their core, thus making
security more transparent to the developer and potentially having the effect of producing more secure web applications.

No matter what solutions you look at to help secure your network, you need to ensure that whatever ones you select do not undermine your
existing security or introduce new vulnerabilities or problems. This is true whether that solution is proprietary software, open source based, an
appliance or indeed a service.
The problem many face when selecting solutions is that vendors will tell you all about the strengths of their product, how many awards it has won, and show
you the glowing reviews it has received in various magazines. Not to mention the FUD (Fear, Uncertainty and Doubt) factor that they rely heavily on and will
push any time they think you may be wavering.
If you want to get beyond the hype and ensure that the company you are dealing with does indeed understand security and has a secure product, then the
following are some questions that I have found to work:


Vulnerability Assessment
Customer Maturity Level: Low to Medium. Usually requested by customers who already know they have issues, and need help getting started.
Goal: Attain a prioritized list of vulnerabilities in the environment so that remediation can occur.
Focus: Breadth over depth.
Penetration Test
Customer Maturity Level: High. The client believes their defenses to be strong, and wants to test that assertion.
Goal: Determine whether a mature security posture can withstand an intrusion attempt from an advanced attacker.
Focus: Depth over breadth.

Protecting your business against the latest Web threats has become an incredibly complicated task. The consequences of external attacks, internal security
breaches, and internet abuse have placed security high on the small business agenda, so what do you need to know about security and what are the key elements
to address? Trend Micro sheds some light on this tricky subject.
This paper addresses the top 10 ways to protect your small business against web threats including:
1. Close your doors to malware
2. Write your policy
3. Tackle social media before it trips you up
4. Protect with passwords
5. Get critical about Internet security
6. Ask employees for help
7. Make reseller/consultant relationship work for you
8. Lead by example
9. Be current
10. Choose a security partner, not just a vendor
Read this white paper to learn more about protecting your small business against web threats.

Cloud Security highlights of the week

Cloud providers' terms and conditions shock study
Cloud computing contracts often contain significant business risks for end user organisations, according to independent research by UK academics. Some
contracts even have clauses disclaiming responsibility for keeping the user's data secure or intact.
Others reserve the right to terminate accounts for apparent lack of use, which is potentially important if they are used for occasional backup or disaster
recovery purposes, according to the Cloud Legal Project at Queen Mary, University of London.
Other contracts can be revoked for violation of the provider's Acceptable Use Policy, or indeed for any or no reason at all, the academics found.

Let's Enable Cloud Computing
I've been thinking a lot about 'cloud computing' over the past few months, and I keep coming back to the same conclusion every time: the InfoSec community
is inhibiting IT innovation by throwing up weak, largely unsubstantiated concerns over the security risks of 'cloud computing.' Overall, our industry's
reaction smacks of 'fear of the unknown.' [1]
After some research [2][3][4][others], I've found that most security-related arguments against cloud computing qualitatively fall into one of the following
risks, in no particular order.

Secure Network Administration highlights of the week (please remember this and more news related to this category can be found here: Network Security):

OpenSSH is a FREE version of the SSH connectivity tools that technical users of the Internet rely on. Users of telnet, rlogin, and ftp may not realize that
their password is transmitted across the Internet unencrypted, but it is. OpenSSH encrypts all traffic (including passwords) to effectively eliminate
eavesdropping, connection hijacking, and other attacks. Additionally, OpenSSH provides secure tunneling capabilities and several authentication methods, and
supports all SSH protocol versions.
SSH is an awesomely powerful tool, and the possibilities are nearly unlimited; here are the top-voted SSH commands.
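As a flavour of what such commands look like, here is a hypothetical sketch of key generation plus the tunneling options the description mentions (host names and ports are placeholders, not taken from the linked list):

```shell
# Generate an RSA key pair for public-key authentication
# (demo path; in practice use ~/.ssh/ and a passphrase)
ssh-keygen -t rsa -b 2048 -f /tmp/demo_key -N '' -q
ssh-keygen -l -f /tmp/demo_key.pub    # show its fingerprint

# Tunneling examples (not run here; hosts are placeholders):
#   ssh -L 8080:intranet:80 user@gateway    # local forward: localhost:8080 -> intranet:80
#   ssh -D 1080 user@gateway                # dynamic SOCKS proxy on localhost:1080
#   ssh -R 2222:localhost:22 user@gateway   # reverse tunnel back to this machine
```

Because all of these run over the encrypted SSH channel, the forwarded traffic gets the same protection against eavesdropping as the login session itself.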

AWK is a data-driven programming language designed for processing text-based data, either in files or data streams. It is an example of a programming
language that extensively uses the string datatype, associative arrays (that is, arrays indexed by key strings), and regular expressions. (Wikipedia)
Here are the top-voted AWK commands.
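As a one-line illustration of the associative arrays the description mentions (a hypothetical example, not one of the linked commands), this counts how often each word appears in a stream:

```shell
# Count word frequencies with an associative array keyed by string
printf 'ssh\nawk\nssh\n' \
  | awk '{count[$1]++} END {for (w in count) print w, count[w]}' \
  | sort
```

The trailing `sort` makes the output deterministic, since `for (w in count)` iterates the array in unspecified order.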

Windows Server 2008 Security Checklist

OpenSSL Cheat Sheet
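A taste of the commands such a cheat sheet typically collects (illustrative examples, not taken from the linked sheet):

```shell
# SHA-256 digest of a string
printf 'hello' | openssl dgst -sha256

# Base64 encoding
printf 'secret' | openssl enc -base64

# Inspect a remote server's certificate chain (placeholder host; not run here)
# openssl s_client -connect example.com:443 -showcerts
```

The same `openssl` binary also handles key generation, certificate signing requests, and symmetric encryption, which is what makes a one-page reference for it so useful.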

I recently got involved in a project where I defined the baseline security settings for Windows and Linux. I used the settings provided by the Center for
Internet Security (CIS).
We decided on the following approach:
* Based on the CIS templates, we created a baseline document specific to our company
* I, in my security role, created a Nessus .audit file so we could audit compliance with our own baseline using Seccubus
* The Windows administrator created GPOs to apply the settings.
When creating the GPOs we made a strange discovery: in a domain whose functional level is Windows 2008, the settings normally marked as MSS: in the category
Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options do not appear.
This made us wonder: have these settings become irrelevant? If not, how can we still set them, preferably via group policy?
The settings are not irrelevant, as e.g. Peter van Eeckhoutte's blog points out. Windows 2008 does not forward IPv4 packets that have source routing set on
them, but it does accept them if the machine is the final destination. However, for IPv6, Windows 2008 will forward these packets by default.
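If group policy will not expose them, the underlying MSS settings can still be applied directly in the registry. The sketch below uses the standard DisableIPSourceRouting value (2 = drop all source-routed packets), with the Tcpip6 key covering the IPv6 forwarding behaviour described above; these are the documented CIS/MSS values, not the author's own procedure:

```bat
:: Drop IPv4 source-routed packets entirely (MSS: DisableIPSourceRouting = 2)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableIPSourceRouting /t REG_DWORD /d 2 /f

:: Same setting for the IPv6 stack
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" /v DisableIPSourceRouting /t REG_DWORD /d 2 /f
```

Registry changes like these can also be pushed through a GPO via Group Policy Preferences, which keeps them centrally managed even when the MSS entries are missing from the Security Options UI.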

The enemy in the network card
Security expert Guillaume Delugré, who works for the Sogeti European Security Expertise Center (ESEC), has demonstrated that a rootkit doesn't necessarily
have to reside on a computer itself. The expert used freely available tools and documentation to develop custom firmware for Broadcom's NetExtreme network
controller. He was then able to conceal a rootkit within the firmware, making it untraceable by the virus scanners usually installed on a PC.

Secure Development highlights of the week (please remember this and more news related to this category can be found here: Web Technologies):

For a long time I've said that security is a quality issue. It sounded good, and it resonated with me, but by and large I am coming to the conclusion that
it's an insufficient understanding. While I still believe the two issues are similar enough for discussion, the nuanced effort required to fix a security
bug vs. a quality bug is night and day. Patching, as we'd fix a quality issue, has permeated the collective InfoSec mindset as a defensive solution for
protecting our infrastructures. Virtually or locally, however, relying on that patching mindset is a death sentence and will always lose to a skillful opponent.
I'd like to be careful to note here that striking back doesn't necessarily imply "hacking back" as some others are proposing. It simply means that
if, in the course of an interaction, we can make our opponent deal with our threats (read: countermeasures), we can regain initiative, and perhaps equally
important, time and space. There are unlimited opportunities for doing so. In fact, this is perhaps the most important bit of all of this: attack and
defense are the same. We always have the same opportunities to be creative and solve problems; it usually comes down to being bold enough to leverage them.

With the recent OWASP AppSec DC presentation on Slow HTTP POST DoS attacks, concern about web server platform DoS has reached a new high. Notice
that I said web server platform and not web application code. The attack scenario raised by the slow HTTP POST attack is related to web server software
(Apache, IIS, SunONE, etc.) and cannot be directly mitigated by the application code. In this blog post, we will highlight the two main varieties of slow
HTTP attacks - slow request headers and slow request bodies. We will then provide some new mitigation options for the Apache web server platform.
Network DoS vs. Layer-7 DoS
Whereas network-level DoS attacks aim to flood your pipe with lower-level OSI traffic (SYN packets, etc.), web application layer DoS attacks can often be
achieved with much less traffic. The point here is that the amount of traffic that can cause an HTTP DoS condition is often much less than what a
network-level device would identify as anomalous, so it would not be reported the way traditional network-level botnet DDoS attacks are.
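One Apache-side mitigation of this kind is the mod_reqtimeout module (shipped with Apache 2.2.15 and later), which aborts connections whose headers or body arrive too slowly. The thresholds below are illustrative, not recommendations from the post:

```apache
<IfModule mod_reqtimeout.c>
    # Give clients 20s (extendable to 40s) to send headers, and require
    # at least 500 bytes/s once the request body starts arriving.
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>
```

Because the slow attacker by definition stays under a data-rate threshold, a minimum-rate rule like this cuts off exactly the connections a volume-based network device would never flag.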

One of the most common vulnerabilities in web applications is known as HTML injection or cross-site scripting, and one of the simplest ways of showing such
a problem exists involves loading a JavaScript alert dialog. Those who understand the ramifications of such an issue know that it creates the potential for
far more malicious activity, but the alert box is an easy demonstration that the application can be automatically manipulated.
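The demonstration usually amounts to a one-line payload reflected back by the application; for instance (hypothetical URL and parameter):

```html
<!-- If the q parameter is echoed into the page unescaped,
     the victim's browser executes the injected script -->
http://example.com/search?q=<script>alert(document.cookie)</script>
```

The alert box itself is harmless; the point is that the same injection point could deliver script that steals cookies or rewrites the page.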
Other vulnerabilities, though, may be more subtle and not as readily visualized. Take cross-site request forgery, for example. It's easy to understand that
there's a problem when an application lets you manipulate the data of other users - the site should validate the account making requests before executing
them. What may not be so obvious is that problems can still arise even when the application checks the account first. If no system exists for verifying that
the account owner actually intended to perform a given action, it may be possible to ride that user's session and make requests without them knowing. The
technical term for this behavior is cross-site request forgery.
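The standard way to verify intent is an anti-CSRF token: a server-generated secret embedded in every form and checked again on submission, so a forged cross-site request cannot supply it. A minimal sketch (the field name and value here are made up):

```html
<form action="/transfer" method="POST">
  <!-- unpredictable per-session token; the server rejects requests without it -->
  <input type="hidden" name="csrf_token" value="d41d8cd98f00b204">
  <input type="text" name="amount">
  <input type="submit" value="Transfer">
</form>
```

An attacker's page can still make the victim's browser POST to /transfer, but it cannot read or guess the token, so the forged request fails the check.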

Do you have a logstash?
I'm a DevOps guy, and I've been talking about logging problems for a long while now. Talking about storing logs. Talking about parsing logs. Talking about
searching logs. Talking about reacting to logs. Now that I'm at Loggly, I'm talking about it more than ever.
Today I'm releasing logstash, an Open Source tool to accomplish all that and more. You can read about the release on my blog and then go download the source
and get started with it.
If you want to see it in action, I've uploaded a demo video on YouTube. Also, Kord and I sat down today and chatted about logstash and its future. That
video is below.

In order to improve the security of applications running on distributed systems, six researchers at Cornell University have developed Fabric. It extends
Jif, also developed at Cornell, to add transactions, calls to functions on remote computers and the persistent storage of objects.
Different types of nodes are involved in the execution of Fabric programs.
The central idea behind Fabric and Jif is 'principals', which formulate and implement security requirements. Relationships and operators
allow users, processes, groups and application-specific units to be modelled, each with their own security requirements.

Robert Abela is a Technical Manager at Acunetix and in this interview he discusses the process of choosing a web vulnerability scanner and underlines
several factors that should be taken into consideration in the decision-making process.
Which is the best web vulnerability scanner out there?
This question has been haunting the web application security field for quite some time and rest assured that no one will ever give you a definite answer.
What works for Mr A does not work for Mr B. This is because every website, or web application - as we call them today - is different. There are some
scanners that perform better than others on websites developed in PHP and others that might perform better on websites developed in .NET, and so on. Also,
people have different needs. Some just need a scanner to generate a PCI DSS compliance report. Others use it for consulting services, to assist them during
a penetration test, and therefore need a scanner that gives them as much information as possible about the target and one that includes a good set of tools
for easing the lengthy process of manual penetration testing.

Finally, I leave you with the secure development featured article of the week courtesy of OWASP (Development Guide Series):

Training Your Developers On Your Enterprise Security API

Introduce the API HOW TO

Start with the ASVS and Top Ten. Show how the custom API for your enterprise takes them into consideration. Show how much easier it is to write a more secure application when security is designed in from the beginning. Explain how objects are designed and what features can be tested with each. Allow for questions and discussion.

Using the API for Audit

The API has audit functions and your developers should use them. Show them how.

Next Steps

Create a feedback mechanism for developers to drive the next iteration of the API. Security is a process that should always be evolving.


Have a great week.