Thursday, 9 December 2010

Security Weekly News 09 December 2010 - Summary

Feedback and/or contributions to make this better are appreciated and welcome

Highlighted quotes of the week:

"Porting all those security fixes in PHP 5 back to PHP 4.4.9 is a PITA" - Stefan Esser (Still using PHP 4? Good luck!)
"Criticizing WAF tech is so "2009" - AppSec is so difficult, you need to use any help you can get" - Jim Manico
"Word of warning, stealing my USB stick and plugging it into your corporate computer will trigger enough AV alerts to cause an investigation and if it doesn't you have more to worry about than me" - Rob Fuller
"If you bitch about facebook privacy and then put your life story on your profile, expect me to ridicule." - Ryan Dewhurst

To view the full security news for this week please click here (divided into categories; there is a category index at the top). The categories this week include (please click to go directly to what you care about): Hacking Incidents / Cybercrime, Software Updates, Business Case for Security, Web Technologies, Network Security, Database Security, Mobile Security, Privacy, Cloud Security, Tools, General, Funny

Highlighted news items of the week (No categories):

Updated/Patched: Sumatra PDF 1.2 released, VMware Security Updates VMSA-2010-0018, ProFTPD Compromise Report, WordPress all version 0day exploit, Google releases Chrome 8.0 stable, Critical Fix 2 for Kaspersky 2011, Winamp 5.601 Released


 
Part 1 (How to find your websites) of the series describes a process for website discovery. This piece (part 2) describes a methodology, which many of our customers have found helpful, for rating the value of a website to the business. Website asset valuation is a necessary step towards overall website security because not all websites are created equal. Some websites host highly sensitive information; others only contain marketing brochure-ware. Some websites transact millions of dollars each day; others make no money, or maybe a little with Google AdSense. The point is we all have limited security resources (time, money, people), so we need to prioritize and focus on the areas that offer the best risk-reducing ROI.

 
GOVCERT: Botnets explained  [www.youtube.com]

 
1. The first risk is posting too much personal or private information.
2. Even if people are aware and careful what they post, they must understand that others can post private information about them.
3. The third risk is scams. This is nothing new; we discussed scams in topic #3, Email and IM.
4. Just like operating systems and smartphones, users should be careful of the 3rd party apps they use.
5. Finally, end users need to be taught that no confidential organization information may be posted (such as publicly posting raid plans the day before a military action). One good rule of thumb: if the information is not already on the company's public website, don't post it.

 
Security Awareness Topic #5 - Browsers  [www.securingthehuman.org]
1. The first step is keeping browsers updated. Vendors are not only constantly patching browsers and fixing known vulnerabilities, but also adding new security features such as sandboxing. Always having the latest version is one of the best ways to help secure your browser and your system. Teach end users how to check if their browser is updated and how to enable automatic updating.
2. The second step is minimizing plugins. The more plugins (or add-ons) a browser has installed, the greater the attack surface, and the more likely a threat can find a vulnerability. In fact, most browser-based attacks nowadays do not target the browser itself but its plugins. In addition, we want to ensure that whatever plugins we have installed are always current. Not sure? Check out one of my favorite end-user tools, Qualys's BrowserCheck.
3. The third step is checking URLs. We can teach end users the basics of reading a domain name. If people are visiting PayPal's website, paypal.com should be the domain name, not just "PayPal" somewhere in the domain suffix or directory structure. Newer browsers make this much simpler by highlighting just the domain people visit. If something looks suspicious, some browsers will highlight the URL in red.
4. Last, we want to make sure that anything end users download is scanned by anti-virus. Yes, we all know that AV cannot detect all malware, but security is all about reducing risk, not eliminating it.
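The domain check in step 3 can be sketched in a few lines. This is a simplified illustration only (it ignores public-suffix subtleties such as co.uk), and the URLs are hypothetical:

```python
from urllib.parse import urlsplit

def hostname(url: str) -> str:
    """Return the host part of a URL - the only place the real site identity lives."""
    return urlsplit(url).hostname or ""

def looks_like(url: str, domain: str) -> bool:
    """The host must BE or END WITH the expected domain; the string
    'paypal.com' appearing in the path or as a prefix proves nothing."""
    host = hostname(url)
    return host == domain or host.endswith("." + domain)

print(looks_like("https://www.paypal.com/login", "paypal.com"))           # True
print(looks_like("https://paypal.com.evil.example/login", "paypal.com"))  # False
print(looks_like("https://evil.example/paypal.com/", "paypal.com"))       # False
```

The second and third URLs are exactly the tricks the article warns about: the real domain buried in a prefix or in the directory structure.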

 
A €3.7 million EU-wide project aimed at improving data protection in Europe is to be led by researchers from Waterford Institute of Technology (WIT).
The Endorse project involves industry experts from the Netherlands, Italy, the UK, Spain, Austria and Ireland. Over the next year, those involved hope to develop software that will allow companies to check compliance with their own country's data protection legislation.




Cloud Security highlights of the week


 
Securing the future  [businessandleadership.com]
Concerns over security and data privacy in the cloud need to be seen in the context of what organisations are currently doing to protect their confidential
information, says Gordon Smith.
Survey after survey rates security concerns as the main obstacle to cloud computing. Whether the risks are perceived or real is to some extent irrelevant; as
long as they exist, cloud providers must address them or face reluctant customers, despite a persuasive business case that offers cost savings and flexible
technology to meet a company's needs.
...
Travel tips for a safe trip into the cloud
* Perform due diligence on the cloud provider you intend to use
* Ask rigorous questions about where data will be physically stored
* Evaluate what implications a cloud strategy has on compliance efforts
* Clearly define roles around protecting and securing data
* Don't assume security is someone else's responsibility
* Assess actual levels of security with in-house IT compared to the cloud
* Don't move the most sensitive company or customer information until the technology is well proven within the business

 
There have been many interesting tidbits that, as expected, are primarily focused on cloud computing and virtualization. That's no surprise as both are top
of mind for IT practitioners, C-level execs, and the market in general.
Another unsurprise would be the response to a live poll conducted at the event indicating the imaginary "cloud security" troll is still a primary concern
for attendees.
I say imaginary because "cloud security" is so vague as a descriptor that it has, at this point, no meaning.
Do you mean the security of the cloud management APIs? The security of the cloud infrastructure? Or the security of your applications when deployed in the
cloud? Or maybe you mean the security of your data accessed by applications when deployed in the cloud? What "security" is it that's cause for concern? And
in what cloud environment?
See, "cloud security" doesn't really exist any more than there are really trolls under bridges that eat little children.
Application, data, platform, and network security, however, do exist and are valid concerns regardless of whether such concerns are raised in the context of
cloud computing or traditional data centers.




Secure Network Administration highlights of the week (please remember this and more news related to this category can be found here: Network Security):


 

 

 
Episode #124: Levelling Up  [blog.commandlinekungfu.com]
Tim set himself up to bomb:
So I came up with the idea for this episode, totally my fault. And I knew going into it that I was setting myself up for a significant beating from Hal. My guess is that it will take him all of five minutes to write his portion. So here goes.
One of the nice features of Windows is the extremely granular permissions that can be granted on files and directories. This functionality comes at a price: it makes auditing of permissions a big pain, especially when it comes to groups and, even worse, nested groups. A few of my colleagues and I were looking for files that would allow us to elevate our privileges from a limited user account to one with more privileges: files run by service accounts, or possibly an administrator, that are also modifiable by a more limited user. In short, we were looking for files owned by an admin but writeable by a limited user.
Before we get into the fu, we need to look at how file permissions look in PowerShell.
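The same hunt has a rough Unix-side analogue, sketched here in Python just to show the shape of the check (root-owned files that group or world can write; this is not the PowerShell approach the episode itself develops):

```python
import os
import stat

def is_priv_esc_candidate(st: os.stat_result) -> bool:
    """A file is interesting if owned by root (uid 0) but
    writable by group or others - a lesser user can alter
    something a privileged account may later execute."""
    return st.st_uid == 0 and bool(st.st_mode & (stat.S_IWGRP | stat.S_IWOTH))

def find_candidates(root: str):
    """Walk a tree and yield paths matching the check above."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unreadable or vanished; skip
            if is_priv_esc_candidate(st):
                yield path
```

On Windows the equivalent question has to be asked of ACLs rather than mode bits, which is exactly why the granularity makes auditing painful.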

 
Another fun attack that willis and I found during our SAP BusinessObjects research is that we could do internal port scanning by using Crystal Reports. The way this works is that when you browse to a Crystal Reports web application (http://hostname/CrystalReports/viewrpt.cwr) there are a few parameters which are used to communicate with the SAP services on the backend. The problem here is that these parameters are controlled by the user. A better approach would be to provide a drop-down list or have all the configuration done by the server.
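A sketch of how such a probe might be driven; the `aps` parameter name below is a placeholder, not necessarily the real Crystal Reports parameter:

```python
from urllib.parse import urlencode

def probe_urls(viewer_base: str, internal_host: str, ports):
    """Yield one viewer request URL per target port. By varying the
    backend-server parameter and diffing response times or error
    messages, a user-controlled parameter becomes an internal port
    scanner relayed through the web application."""
    for port in ports:
        yield viewer_base + "?" + urlencode({"aps": f"{internal_host}:{port}"})
```

Whether a given port is open would then be inferred from the relayed error message or the response delay, since it is the server, not the attacker, making the connection attempt.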

 
In the modern client-focused threat landscape, JavaScript plays a very important role in delivering and executing attacks. Many browser-based vulnerabilities are triggered by specific sets of JavaScript calls, and HTML, CSS, or PDF-based vulnerabilities often have accompanying payloads written in JavaScript. Thus, if a defender can reliably detect malicious JavaScript, they can protect against the vast bulk of browser-based, client-side attacks - including 0-days delivered with standard malicious JavaScript tricks.

 
DEP (Data Execution Prevention) and ASLR (Address Space Layout Randomization) have proven themselves to be important and effective countermeasures against
the types of exploits that we see in the wild today. Of course, any useful mitigation technology will attract scrutiny, and over the past year there has
been an increasing amount of research and discussion on the subject of bypassing DEP and ASLR [1,2]. In this blog post we wanted to spend some time
discussing the effectiveness of these mitigations by providing some context for the bypass techniques that have been outlined in attack research. The key
points that should be taken away from this blog post are:
* DEP and ASLR are designed to increase an attacker's exploit development costs and decrease their return on investment.
* The combination of DEP and ASLR is very effective at breaking the types of exploits we see in the wild today, but there are circumstances where they can
both be bypassed.
* Exploits targeting Microsoft and third party vulnerabilities have been created that are capable of bypassing DEP and ASLR in the context of browsers and
third party applications.
* We are currently not aware of any remote exploits that are capable of bypassing DEP and ASLR in the context of in-box Windows services and various other
application domains.
* Knowledge of potential bypass techniques directly informs our future work to improve the robustness and resiliency of DEP, ASLR, and our other mitigation technologies.

 
As browser-based exploits and specifically JavaScript malware have shouldered their way to the top of the list of threats, browser vendors have been scrambling to find effective defenses to protect users. Few have been forthcoming, but Microsoft Research has developed a new tool called Zozzle that can be deployed in the browser and can detect JavaScript-based malware at a very high effectiveness rate.
Zozzle is designed to perform static analysis of JavaScript code on a given site and quickly determine whether the code is malicious and includes an
exploit. In order to be effective, the tool must be trained to recognize the elements that are common to malicious JavaScript, and the researchers behind it
stress that it works best on de-obfuscated code. In the paper, the researchers say that they trained Zozzle by crawling millions of Web sites and using a
similar tool, called Nozzle, to process the URLs and see whether malware was present.




Secure Development highlights of the week (please remember this and more news related to this category can be found here: Web Technologies):


 
Anyone doing ASP.NET development probably admits, openly or not, to introducing or stumbling upon a security issue at some point during their career.
Developers are often pressured to deliver code as quickly as possible, and the complexity of the platform and vast number of configuration options often
leaves the application in a less than desirable security state. In addition, the configuration requirements for debugging and production are different,
which can often introduce debugging settings in production, causing a variety of issues.
Over the years, the ASP.NET platform has matured and better documentation has been made available through MSDN and community blogs, but knowing which
feature or configuration setting to use is often troublesome. Even with good knowledge of the security functionality, mistakes can happen that could result
in security vulnerabilities in your application.
Peer code review is a useful process and a good way to catch issues early. Still, not everyone has the time or budget, or knowledgeable peers at hand, for such review.
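As one concrete example of the debug-versus-production gap the article mentions, these are the two web.config settings most commonly left in their development state when an ASP.NET site goes live (a fragment only; the surrounding configuration and the redirect page name are illustrative):

```xml
<system.web>
  <!-- debug="true" in production disables optimizations and leaks detail -->
  <compilation debug="false" />
  <!-- "On" keeps raw stack traces away from remote users -->
  <customErrors mode="On" defaultRedirect="~/error.aspx" />
</system.web>
```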

 
HPP attacks consist of injecting encoded query string delimiters into other existing parameters. If a web application does not properly sanitize the user
input, a malicious user can compromise the logic of the application to perform either client-side or server-side attacks. One consequence of HPP attacks is
that the attacker can potentially override existing hard-coded HTTP parameters to modify the behavior of an application, bypass input validation
checkpoints, and access and possibly exploit variables that may be out of direct reach.
The consequences of the attack depend on the application's logic, and may vary from a simple annoyance to a complete corruption of the application's
behavior.
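The ambiguity that HPP exploits is easy to demonstrate: different stacks disagree on which copy of a duplicated parameter "wins". A quick illustration with Python's standard query-string parser (the parameter names are invented):

```python
from urllib.parse import parse_qs

# A query string carrying the same parameter twice.
params = parse_qs("action=view&action=delete")
print(params["action"])        # Python keeps both: ['view', 'delete']

first = params["action"][0]    # some platforms honor the first occurrence...
last = params["action"][-1]    # ...others the last, or even concatenate them
```

If input validation reads one copy while the business logic reads the other, a hard-coded parameter can be overridden exactly as described above.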

 
12 programming mistakes to avoid  [www.infoworld.com]
The dirty dozen of application development pitfalls -- and how to avoid these all-too-common programming blunders
A car magazine once declared that a car has 'character' if it takes 15 minutes to explain its idiosyncrasies before it can be loaned to a friend. By that standard, every piece of software has character -- all too often, right out of the box.
Most programming 'peculiarities' are unique to a particular context, rendering them highly obscure. Websites that deliver XML data, for example, may not
have been coded to tell the browser to expect XML data, causing all functions to fall apart until the correct value fills the field.

 
Lately it seems that a lot of people are talking about the potential security vulnerabilities of having an unrestricted crossdomain.xml. It's public
knowledge that this can be abused by an attacker setting up Cross Site Request Forgery.
Below is sample code from a crossdomain.xml. This is a simple one; some of the big websites that leave crossdomain.xml unrestricted have a lot more data in the XML file. From the blogs that I have read in the community and from the HP WebInspect Remediation Guide, "Exploiting a Vulnerability Involves crafting a custom Flash Application".
<cross-domain-policy>
<site-control permitted-cross-domain-policies='all'/>
<allow-access-from domain='*'/>
</cross-domain-policy>
The fix is "not to design and deploy Flash APIs meant to be accessible to arbitrary third parties". It is also recommended "to host these on a sub domain". We are not going to discuss how to exploit this vulnerability but rather how to find it in the wild with O2.
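For contrast, a locked-down policy along the lines of that recommendation might look like this (the domain name is illustrative):

```xml
<cross-domain-policy>
<site-control permitted-cross-domain-policies='master-only'/>
<allow-access-from domain='www.example.com'/>
</cross-domain-policy>
```

Restricting `permitted-cross-domain-policies` to `master-only` and naming explicit domains, rather than `*`, removes the arbitrary-third-party access that makes the unrestricted version abusable.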

 
Sessionmanagement in X-Header  [tar-xvzf.blogspot.com]
I recently stumbled upon a solution for session management where I'm searching hard to find the weak points, but I have failed so far.
There is this web 2.0 application which neither uses cookies nor transports the session ID within the URL. The devs decided to transport the session ID within an X-Header HTTP field which they send over on each (XHR) request. This seems smart from a range of perspectives:
1. They do not have to care about CSRF protection, because no session identifier is sent without explicit intent.
2. They do not have to care about cached or otherwise leaked session IDs, since the ID is held in RAM only and the user doesn't see it (and thus cannot share it accidentally).
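The client side of such a scheme can be sketched as follows; the header name, host, and token value here are illustrative, not taken from the post:

```python
from urllib.request import Request

# Sketch: the session ID rides in a custom header on every XHR-style
# request instead of a cookie or URL parameter.
req = Request("https://app.example.com/api/data")
req.add_header("X-Session-Id", "0f8c2a17")  # held in client RAM only

# Because browsers never attach custom headers automatically (and a
# cross-origin page cannot set one without CORS consent), a forged
# cross-site request arrives without the session ID - which is why
# point 1 above holds.
```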

 
Website Monocultures and Polycultures  [jeremiahgrossman.blogspot.com]
Before diving in, let's first establish a baseline on the fundamental assumptions about software monocultures and polycultures. Monocultures, meaning all systems are identical, are at elevated risk of systemic widespread compromise because all nodes are vulnerable to the same attack. For example, a single exploit (a zero-day is not necessarily required) is capable of ripping through the entire ecosystem. The benefit of a monoculture, however, is that the consistency of all the connected nodes allows for easier management by IT. Manageability makes keeping patches up-to-date less difficult and by extension raises the bar against targeted attacks and random opportunistic worms.
...
So if website attacks are generally targeted, again except for SQLi worms, and it's easier to secure code written all in the same language, then we should be advocating monoculture websites. Right? Which is exactly the opposite of how the community seems to want to treat networks. I just found that to be really interesting. What I'm working on now inside WhiteHat is trying to find statistical evidence, in real terms, of how the security posture of the average monoculture and polyculture compare. I'm guessing monoculture websites are noticeably more secure, that is, have fewer vulnerabilities. But what would your theory be?

 
As application inventories have become larger, more diverse, and increasingly complex, organizations have struggled to build application security testing
programs that are effective and scalable. New technologies and methodologies promise to help streamline the Secure Development Lifecycle (SDLC), making
processes more efficient and easing the burden of information overload.
In the realm of automated web application testing, today's technologies fall into one of two categories: Static Application Security Testing (SAST) and
Dynamic Application Security Testing (DAST). SAST analyzes application binaries or source code, detecting vulnerabilities by identifying insecure code paths
without actually executing the program. In contrast, DAST detects vulnerabilities by conducting attacks against a running instance of the application,
simulating the behavior of a live attacker. Most enterprises have incorporated at least one SAST or DAST technology; those with mature SDLCs may even use
more than one of each.

 
Virtual patching with mod security  [www.securityninja.co.uk]
As someone who is responsible for operational security, I think that one of the biggest challenges I have to deal with is how to keep systems and applications up to date with no service interruptions.
It is not only a question of having good patching policies or procedures that dictate how you have to patch after a vulnerability is found in your platform. The time required to analyse the vulnerability, develop a fix, test the fix and deploy it into production can leave a system vulnerable to attack for a period of time which might not be acceptable to the business.
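As an illustration of what a virtual patch looks like in practice, a ModSecurity rule pair such as the following could enforce that a vulnerable `id` parameter stays numeric until the real code fix ships; the path, rule ID, and parameter name are all made up for the example:

```apache
# Virtual patch: only apply to the affected page, then reject any
# non-numeric 'id' value before it reaches the vulnerable code.
SecRule REQUEST_FILENAME "@streq /account/view.php" \
    "chain,id:100001,phase:2,deny,status:403,log,msg:'Virtual patch: id must be numeric'"
    SecRule ARGS:id "!@rx ^[0-9]+$"
```

The attack surface is closed at the WAF layer immediately, buying time for the analyse-fix-test-deploy cycle described above to run at a pace the business can accept.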

 
disabling websockets for firefox 4  [www.0xdeadbeef.com]
We've decided to disable support for WebSockets in Firefox 4, starting with beta 8, due to a protocol-level security issue. Beta 7 included support for the -76 version of the protocol, the same version that's included with Chrome and Safari.
Adam Barth recently demonstrated some serious attacks against the protocol that could be used by an attacker to poison caches that sit in between the
browser and the Internet.
Once we have a version of the protocol that we feel is secure and stable, we will include it in a release of Firefox, even a minor update release. The code
will remain in the tree to facilitate development, but will only be activated when a developer sets a hidden preference in Firefox.




Finally, I leave you with the secure development featured article of the week courtesy of OWASP (Development Guide Series):


Secure Application Architecture Design



Identify and understand any corporate security policies and regulations

Identify any regulations or compliance requirements you must adhere to at the state, federal, or industry level (PCI, HIPAA, SOX, etc.) in addition to your corporate security policies.

Data Classification and Sensitivity



- One of the first steps that should be taken is to understand the type of data that will be processed by the application and the sensitivity of that data. Understanding this will also help identify any regulations imposed upon you, such as HIPAA for handling patient information or PCI if the application will be processing and storing cardholder data. This will also guide you through the SDLC and how to secure the application, its environment, and the data it processes and stores.



Identify if any of the supporting application components are shared

- Are there other applications hosted on the same web or application server, or is the database shared by other applications? Another, insecure application can potentially compromise the entire server and thus the other applications on it.



- Are separate functions of the application physically and logically separated? Will the application tiers be physically separated, with separate servers for the application, web server, database, etc., or will any of these be hosted on the same physical server (such as a web server on port 80 and a database on port 1433 on the same machine)? Virtualization is also very common as a way to reduce cost and space requirements, but it brings many security implications with it. Additionally, will any components of the application, such as the database, be shared with other applications outside of the current application?




Source: link



Have a great week and weekend.