Friday, March 28, 2008
Pangolin is a GUI tool running on Windows that performs as much penetration testing as possible through SQL injection. This version supports the following databases and operations:
MSSQL: server information, data, CMD execution, registry editing, file write, file download, file read, file browser...
ORACLE: server information, data, account cracking...
PGSQL: server information, data, file read...
DB2: server information, data, ...
INFORMIX: server information, data, ...
SQLITE: server information, data, ...
Access: server information, data, ...
SYBASE: server information, data, etc.
Specify any HTTP headers (User-Agent, Cookie, Referer and so on)
Bypass firewall settings
Detailed check options
Injection-point management, etc.
What makes it different from the others?
Ease of use: the aim is to let pen-testers focus on results rather than process; all you should need to do is click buttons. Amazing speed: so many people have told you that SQL injection must be brute-forced, but is that really necessary? Forget char-by-char; we can extract row-by-row (of course, not every injection point supports this).
The exact check method: do you really think automated tools like AWVS or AppScan can find all injection points?
So, whatever, just check it out, and then enjoy ;)
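The char-by-char vs. row-by-row point is easy to see in miniature. Below is a hedged Python sketch (not Pangolin's code; the secret value and the oracle are simulated stand-ins for a vulnerable page) showing why boolean-blind extraction costs many requests per character, whereas a row-at-a-time technique such as UNION- or error-based injection returns the same data in a single response:

```python
# Hedged sketch, not Pangolin's code: SECRET and the oracle simulate a
# boolean-blind injection point such as
#   ...?id=1 AND ASCII(SUBSTRING(password,i,1)) > n
SECRET = "sa_p4ss"  # pretend this is a DB value we cannot read directly

def oracle_greater(i, n):
    # simulates the true/false difference in the vulnerable page's response
    return ord(SECRET[i]) > n

def extract_char_by_char(length):
    """Binary-search each printable character: ~7 requests per character."""
    result, requests = "", 0
    for i in range(length):
        lo, hi = 32, 126
        while lo < hi:
            mid = (lo + hi) // 2
            requests += 1
            if oracle_greater(i, mid):
                lo = mid + 1
            else:
                hi = mid
        result += chr(lo)
    return result, requests

recovered, cost = extract_char_by_char(len(SECRET))
print(recovered, cost)  # dozens of requests for one short value;
                        # a UNION/error-based row read would need only one
```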
More information : http://www.nosec.org/web/index.php?q=pangolin
Download : http://seclab.nosec.org/security/pangolin_bin.rar
Disclaimer: Pangolin is designed for security testing by pen-testers who have been authorized. DO NOT attack any website maliciously, or accept the consequences!!!
Wednesday, March 26, 2008
Webshag is a free, multi-threaded, multi-platform web server audit tool. Written in Python, it gathers commonly useful functionalities for web server auditing like website crawling, URL scanning or file fuzzing. It also provides innovative functionalities like the capability of retrieving the list of domain names hosted on a target machine and file fuzzing using *dynamically* generated filenames (in addition to common list-based fuzzing).
Webshag URL scanner and file fuzzer are aimed at reducing the number of false positives and thus producing cleaner result sets. For this purpose, webshag implements a web page fingerprinting mechanism resistant to content changes. This fingerprinting mechanism is then used in a false positive removal algorithm specially aimed at dealing with "soft 404" server responses.
Webshag provides a full featured and intuitive graphical user interface as well as a text-based command line interface.
Tuesday, March 25, 2008
Acunetix, last year in November, launched a Free edition of its popular web vulnerability scanner, which allows companies to check for cross site scripting vulnerabilities in their websites at no charge. The Free Edition of Acunetix Web Vulnerability Scanner (WVS) is available here.
What is Cross Site Scripting?
In a study conducted by Acunetix, 42% of the websites scanned with Acunetix WVS were found to be vulnerable to Cross Site Scripting.
“Companies don’t realize the danger their web sites are under and are therefore reluctant to invest in web vulnerability scanners. Consequently, security officers don’t have the tools to protect their websites. The free XSS scanner will give security officers access to a professional cross site scanning tool, that will allow them to assess their web sites for the cross site scripting danger,” said Jonathan Spiteri, Technical Manager of Acunetix.
Scanning for XSS vulnerabilities with Acunetix WVS Free Edition
To check whether your website has cross site scripting vulnerabilities, download the free edition from here. This version will scan any website / web application for XSS vulnerabilities, and it will also reveal all the essential information related to them, such as the vulnerability location and remediation techniques. Scanning for XSS is normally a quick exercise (depending on the size of the website). A detailed guide on how to scan for cross site scripting vulnerabilities can be found here.
The Free Edition also allows you to sample what other threats Acunetix WVS can find by allowing you to scan the Acunetix test sites for vulnerabilities.
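For the curious, the core idea behind detecting reflected XSS can be sketched in a few lines. This is not Acunetix's implementation; the two "pages" below are toy functions standing in for real HTTP responses:

```python
# Hedged sketch of reflected-XSS detection: submit a unique marker in a
# parameter and flag it if the marker comes back unencoded in the HTML.
import html
import uuid

def vulnerable_search(q):          # toy page: echoes input verbatim
    return f"<p>Results for {q}</p>"

def safe_search(q):                # toy page: HTML-encodes input
    return f"<p>Results for {html.escape(q)}</p>"

def probe_for_xss(render):
    marker = f"xss{uuid.uuid4().hex[:8]}"
    payload = f'"><script>{marker}</script>'
    response = render(payload)
    # if the raw payload survives encoding, script injection is likely
    return payload in response

print(probe_for_xss(vulnerable_search))  # True  -> flag as vulnerable
print(probe_for_xss(safe_search))        # False -> payload was escaped
```

Real scanners add many payload variants and context checks on top of this, but the reflect-and-compare loop is the heart of it.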
Robert Hansen aka R-Snake has posted a very interesting article today over at his blog. As R-Snake states:
Whelp, we’ve talked about it, but now it’s finally possible. CSRF can now cause jail time. The FBI has begun arresting people who click on links to supposed child pornography. Now, I understand the noble pursuit, but there’s a fairly huge flaw in the old logic. I can force users to click on links anytime I want. Now here comes some interesting CSRF technology grey area. The authorities might reasonably say, “The referrer doesn’t match.” Okay, well that’s what our good friend META refresh is for. I can force you to click on things without leaving a referring URL at all.

I agree completely with R-Snake on this topic. While I would love taking down those trying to view child pornography, I think we should all be scared of a world where someone can simply force you to view a page through CSRF and possibly get you arrested for a very serious crime. It seems like with each new law related to technology, I get more and more scared of even using the internet.
So now the real question is would a user with no referring URL be worthy of investigation?
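To see why the referrer argument is so weak, here is a hedged sketch of the kind of page R-Snake describes: anyone who merely loads it is redirected to the target URL, and a META refresh leaves no Referer header in most browsers, so the request looks like a direct "click". The URL is a placeholder, not a real site:

```python
# Hedged sketch of a forced "click" via META refresh. The victim never
# clicks anything; loading the page triggers the request, and most
# browsers send no Referer header for META-refresh navigation.

def forced_click_page(target_url):
    return f"""<html><head>
  <meta http-equiv="refresh" content="0;url={target_url}">
</head><body></body></html>"""

page = forced_click_page("http://example.com/innocuous-looking-link")
print(page)
```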
Wednesday, March 19, 2008
Firefox extensions enable users to customize Firefox with additional features. Generally, Firefox extensions are free, open-source, and easily downloaded as .xpi files. This article explains how to hack Firefox extensions of the .xpi variety. There are many reasons why someone would want to hack a Firefox extension — examples include: editing code, debugging errors, and learning extensions. This hack method requires a web browser, zip utility, and text editor.
Step 1: Secure an extension
By default, the Firefox browser will cache and attempt to install any "extension.xpi" file it encounters. Once Firefox installs the plugin, it becomes much more complicated to hack. Therefore, it is best to save an offline copy of the extension. This is easily accomplished with a browser such as IE that does not automatically install the extension, but rather provides an option to save a copy. With IE, simply right-click the extension.xpi link and "Save Target As..". Regardless of the method, the point here is to secure a local copy of the extension.xpi file. Remember to make a backup copy.
Step 2: Initial extraction
Once you have a working extension.xpi file, open it with a zip utility and extract the files into some directory, say, "/xpicontents/". Within the /xpicontents/ directory there should be at least a "chrome" folder, an "install.rdf" file, and a "license.txt" file. Certain extensions may include additional and/or different files or folders. If anything looks too unfamiliar, extrapolate the method or find a different extension to use as you follow along.
Step 3: Editing the .rdf file
Step 4: Hacking the .jar file
Step 5: Repacking the extension
After the necessary edits have been made, it is time to put humpty back together again. The first step is to replace the original contents of the something.jar.zip file with the freshly edited contents. To do this, select both content and skin folders (or whichever contains the edited material), right-click and add the selected folders to the something.jar.zip file located within the /jarcontents/ directory. Then, rename something.jar.zip back to something.jar.
At this point, you are ready to (re)package the chrome folder, install.rdf file, and license.txt file into a new .zip file, which we will call "hacked-extension.zip". To do this, simply select all three items and zip them into a new file named hacked-extension.zip. Finally, rename hacked-extension.zip to match exactly the name of the original extension, extension.xpi.
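Since .xpi and .jar files are ordinary ZIP archives, steps 2 and 5 can also be scripted. A minimal sketch using Python's standard library (the directory and file names below are illustrative):

```python
# Hedged sketch: .xpi and .jar files are plain ZIP archives, so the
# extract/repack steps can be automated with the zipfile module.
import os
import tempfile
import zipfile

def unpack_xpi(xpi_path, out_dir):
    with zipfile.ZipFile(xpi_path) as z:
        z.extractall(out_dir)

def repack_xpi(src_dir, xpi_path):
    with zipfile.ZipFile(xpi_path, "w", zipfile.ZIP_DEFLATED) as z:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # store paths relative to src_dir so install.rdf lands at
                # the archive root, where Firefox expects it
                z.write(full, os.path.relpath(full, src_dir))

# tiny demo with a stand-in extension layout
work = tempfile.mkdtemp()
src = os.path.join(work, "xpicontents")
os.makedirs(os.path.join(src, "chrome"))
open(os.path.join(src, "install.rdf"), "w").write("<RDF/>")
open(os.path.join(src, "chrome", "extension.jar"), "w").write("")
xpi = os.path.join(work, "extension.xpi")
repack_xpi(src, xpi)
print(zipfile.ZipFile(xpi).namelist())
```

The key detail is the relative archive path: if install.rdf ends up inside a subfolder of the ZIP, Firefox will refuse the extension.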
Step 6: Installation and testing
Once the necessary edits have been made, it is time to install the extension. Open Firefox, drag-and-drop the extension, and click OK to install. Restart Firefox, activate the extension, and check its functionality. Lather, rinse, repeat. It may be a good idea to test the hacked extension under a variety of different user conditions. Or not. Whatever. At this point, it’s entirely up to you. You may also want to save a copy of the original extension together with your hacked extension along with a few notes, just in case.
Tuesday, March 18, 2008
An interesting project has been started by Kevin Orrey; I will try to contribute something to it soon.
He has defined the Penetration Testing Framework, a well-defined, must-have checklist for all pen testers. If you have any extra content, especially syntax, reference material and tool information, try contributing to this project.
Monday, March 17, 2008
McAfee uncovered a newer mass hack affecting over 10,000 web pages; that number has since doubled. This recent mass attack is similar to those reported by Dancho Danchev, but references a JS file rather than an IFRAME.
The attack seems to have started more than a week ago, and nearly 200,000 web pages have been found to be compromised, most of which are running phpBB.
phpBB attacks rely on social engineering. phpBB mass hacks have occurred in the past, including those done by the Perl/Santy.worm back in 2004.
Here’s a brief video demonstrating how the phpBB attack looks from the end user’s perspective.
March 2008 - Mass Hack Demo from Schmooog on Vimeo.
Sunday, March 16, 2008
SecureWorks, one of the leading Security as a Service providers, announced last week that hackers are successfully scamming banking customers with spear phishing emails stating that their banking digital certificate has expired. The malicious emails state that in order for the bank customer to access their bank account, they must load a new certificate by clicking on an enclosed link.
Once they click on the link, they are actually downloading the Prg Banking Trojan. This banking Trojan, originally discovered by SecureWorks in December 2007, is one of the most sophisticated and lethal pieces of banking malware developed.
The Prg Banking Trojan enables the hacker to be alerted when the victim is doing online banking so the hacker can piggyback in on the session with the victim. This way the hacker can compromise the victim's bank account without using the victim’s username and password.
According to Don Jackson, Senior Security Researcher with SecureWorks' Counter Threat Unit™, the hackers behind the Prg Banking Trojan scam have successfully used the digital certificate ploy since September 2007. SecureWorks reported that the Prg Banking hackers targeted commercial banking customers last December, and that one scam resulted in the theft of over $6 million from banks in the US, UK, Spain and Italy.
Bank customers should avoid clicking on any links within emails from untrusted sources. Even if they recognize the sender, they should find some way, besides replying to the email, to verify the email’s authenticity such as calling the bank directly.
According to InfoWorld, Trend Micro removed the infected pages from its Web site. While the attack is unfortunate for Trend Micro, at least it had company.
McAfee says almost 200,000 Web pages have been compromised in a little more than a week.
Here’s what McAfee had to say:
The attack seems to have started more than a week ago, and nearly 200,000 web pages have been found to be compromised, most of which are running phpBB. This contrasts yesterday’s attack in that the vast majority of those were active server pages (.ASP). The ASP attacks are different than the phpBB ones in that the payload and method are quite different. Various exploits are used in the ASP attacks, where the phpBB ones rely on social engineering. phpBB mass hacks have occurred in the past, including those done by the Perl/Santy.worm back in 2004.

McAfee has a handy video of the attack that’s worth a look.
McAfee was following up an attack detailed on Wednesday that infected 10,000 pages. The Wednesday attack involved an “injection of script into valid web page to include a reference to a malicious .JS file (sometimes in the BODY, other times in the TITLE section). The .JS file uses script to write an IFRAME, which loads an HTML file that attempts to exploit several vulnerabilities.”
Not surprisingly, a lot of those vulnerabilities were ActiveX controls.
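A rough sketch of how such injected script references might be flagged follows. This is not McAfee's tooling; the allow-list and sample pages are made up for illustration:

```python
# Hedged sketch: flag <script src=...> tags pointing at hosts outside a
# site's own allow-list, including ones injected into TITLE or BODY.
import re

ALLOWED_HOSTS = {"www.example.com"}  # the site's own host(s); illustrative

SCRIPT_SRC = re.compile(
    r'<script[^>]+src=["\']?(https?://([^/"\'>]+)[^"\'>]*)',
    re.IGNORECASE)

def suspicious_scripts(page_html):
    return [url for url, host in SCRIPT_SRC.findall(page_html)
            if host not in ALLOWED_HOSTS]

infected = '<title>Shop<script src="http://evil.example.net/b.js"></script></title>'
clean = '<script src="http://www.example.com/app.js"></script>'
print(suspicious_scripts(infected))  # ['http://evil.example.net/b.js']
print(suspicious_scripts(clean))     # []
```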
Friday, March 14, 2008
With so many opportunities for hackers to exploit Web technology, what can organizations do to protect their Web-Based assets?
First, think defensively. Instead of focusing only on how to attract users to your site, assume that some of those users will try to manipulate your applications. Help build security into your Web applications by testing for vulnerabilities throughout the development and delivery lifecycle. Use automated tools to help ensure that you are testing all your applications and detecting vulnerabilities that can slip through the cracks with manual testing. In addition, keep the following rules in mind:
* never trust data that comes from a user, and
* never make assumptions about the limits of a user's technologies.
In other words, all data from outside sources is potentially dangerous. Assume that anything a user could theoretically manipulate will be manipulated. Moreover, just because a user is supposedly employing a specific technology, do not assume that it will constrain his or her actions. For example, even if a browser does not show hidden fields in a page's HTML code, you should assume that some users will be able to find and manipulate those fields before sending pages back to your server.
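The hidden-field warning is worth a concrete example. In this hedged Python sketch (the catalog and field names are invented for illustration), the insecure handler trusts a price the browser sent back, while the secure one recomputes it server-side:

```python
# Hedged sketch: never trust a value echoed back from the client; look it
# up or recompute it on the server. Prices in integer cents, illustrative.
CATALOG = {"sku-100": 4999, "sku-200": 999}

def checkout_insecure(form):
    # trusts the hidden "price" field the browser submitted back
    return int(form["price"]) * int(form["qty"])

def checkout_secure(form):
    # ignores the client's price; the server is the authority
    if form["sku"] not in CATALOG:
        raise ValueError("unknown product")
    return CATALOG[form["sku"]] * int(form["qty"])

tampered = {"sku": "sku-100", "price": "1", "qty": "3"}
print(checkout_insecure(tampered))  # 3     (attacker set the price)
print(checkout_secure(tampered))    # 14997 (server set the price)
```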
Wednesday, March 12, 2008
1) Application Environment:
* Identify, understand and accommodate your organization's security policies.
* Recognize infrastructure restrictions, such as services, protocols and firewalls.
* Identify hosting environment restrictions (e.g., virtual private network [VPN], sandboxing)
* Define the application deployment configuration.
* Define network domain structures, clustering and remote application servers.
* Identify database servers
* Identify which secure communication features the environment supports
* Address Web farm considerations (including session state management, machine-specific encryption keys, SSL, certificate deployment issues and roaming profiles). If the application uses SSL, identify the certificate authority (CA) and types to be used.
* Address required scalability and performance criteria.
* Investigate the code trust level.
2) Input/Data validation and authentication:
* Assume that all client input is potentially dangerous.
* Identify all trust boundaries and the accounts and/or resources that cross them.
* Define account management policies and a least-privileged accounts policy.
* Specify requirements for strong passwords and enforcement measures.
* Encrypt user credentials using SSL, VPN, IPsec or the like, and ensure that authentication information (e.g., tokens, cookies, tickets) will not be transmitted over non-encrypted connections.
* Ensure that minimal error information will be returned to the client in the event of authentication failure.
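Two of these points, strong-password enforcement and minimal error information, can be sketched as follows. The policy thresholds and the user store are illustrative only; a real system stores password hashes, never plaintext:

```python
# Hedged sketch: a password policy check, plus a login that returns the
# same generic error whether the username or the password was wrong, so
# the response leaks nothing to an attacker probing for valid accounts.
import re

def password_is_strong(pw):
    return (len(pw) >= 10
            and bool(re.search(r"[a-z]", pw))
            and bool(re.search(r"[A-Z]", pw))
            and bool(re.search(r"[0-9]", pw))
            and bool(re.search(r"[^A-Za-z0-9]", pw)))

USERS = {"alice": "Tr0ub4dor&3x"}  # illustrative store; hash in real life

def login(user, pw):
    if USERS.get(user) == pw:
        return "OK"
    return "Invalid username or password"  # same message for both failures

print(password_is_strong("password"))      # False
print(password_is_strong("Tr0ub4dor&3x"))  # True
print(login("alice", "wrong"))    # Invalid username or password
print(login("mallory", "x"))      # Invalid username or password
```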
3) Session Management:
* Limit the session lifetime.
* Protect the session state from unauthorized access.
* Ensure that session identifiers are not passed in query strings.
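These three rules might look like the following in practice. This is a hedged sketch, not a complete session framework; the lifetime and cookie attributes are illustrative:

```python
# Hedged sketch of the session rules above: an unguessable identifier,
# carried in a cookie (never in a query string), with a bounded lifetime.
import secrets
import time

SESSION_LIFETIME = 15 * 60  # seconds; the policy value is illustrative
sessions = {}

def create_session(user):
    sid = secrets.token_urlsafe(32)  # 256 random bits: not guessable
    sessions[sid] = {"user": user,
                     "expires": time.time() + SESSION_LIFETIME}
    # HttpOnly keeps scripts away from it; Secure keeps it off plain HTTP
    cookie = f"SID={sid}; HttpOnly; Secure; Path=/"
    return sid, cookie

def get_session(sid):
    s = sessions.get(sid)
    if s is None or time.time() > s["expires"]:
        sessions.pop(sid, None)  # expired: drop the state entirely
        return None
    return s

sid, cookie = create_session("alice")
print(get_session(sid)["user"])  # alice
```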
Tuesday, March 11, 2008
A US company has launched a computer program that can turn most flash memory sticks, hard drives or iPods into "virtual" PCs that can run most programs that work on Windows XP.
The software, known as MojoPac, allows you to use any computer without leaving a trail of evidence.
Every time you plug your MojoPac-enabled device into any Windows XP PC, MojoPac automatically launches your environment on the host PC. Your communications, music, games, applications, and files are all local and accessible. And when you unplug the MojoPac device, no trace is left behind: your information is not cached on the host PC.
This independence allows people to use public computers without a trace of their session being left behind. PCs typically store a record of activity long after the computer has been turned off. "It's a slick way to move from machine to machine," says Rob Enderle, founder of the Enderle Group, a research firm that follows the PC industry. "It's about as safe as you can get."
Monday, March 10, 2008
To address security-related issues as they pertain to Web Applications, organizations can employ four broad, strategic best practices.
1. Increase Security Awareness
This encompasses training, communication and monitoring activities, preferably in cooperation with a consultant.
Provide annual security training for all application team members: developers, quality assurance professionals, analysts and managers. Describe current attacks and a recommended remediation process. Discuss the organization's current security practices. Require developers to attend training to master the framework's prebuilt security functions. Use vendor-supplied material to train users on commercial off-the-shelf (COTS) security tools, and include security training in the project plan.
Collect security best practices from across all teams and lines of business in your organization. Distribute them in a brief document and make them easily accessible on an intranet. Get your IT security experts involved early and develop processes that include peer mentoring. Assign a liaison from the security team to every application team to help with application requirements and design.
Ensure that managers stay aware of the security status of every application in production. Track security errors through your normal defect tracking and reporting infrastructures to give all parties visibility.
2. Categorize application risk and liability
Every organization has limited resources and must manage priorities. To help set security priorities, you can:
* Define risk thresholds and specify when the security team will terminate application services.
* Categorize applications by risk factors (e.g. Internet or Intranet vs. Extranet).
* Generate periodic risk reports based on security scans that match issues to defined risk thresholds.
* Maintain a database that can analyze and rank applications by risk, so you can inform teams of how their applications stack up against deployed systems.
3. Set a zero-tolerance enforcement policy
An essential part of governing the development and delivery process, a well-defined security policy can reduce your risk of deploying vulnerable or non-compliant applications. During inception, determine which tests the application must pass before deployment, and inform all team members. Formally review requirements and design specifications for security issues during inception and elaboration, before coding begins. Allow security exceptions only during design and only with appropriate executive-level approval.
4. Integrate security testing throughout the development and delivery process
By integrating security testing throughout the delivery lifecycle, you can have significant positive effects on the design, development and testing of applications. You should base functional requirements on security tests your application must pass, making sure that your test framework:
* Uses automated tools and can run at any point during the development and delivery process.
* Includes unit and system tests as well as application-level tests.
* Allows for audit testing in production.
* Uses an agile development methodology for security procedures.
* Can be run during coding, testing, integration and production activities.
Friday, March 7, 2008
By using security-specific processes to create applications, software development teams can guard against security violations. Specifically, several basic guidelines can be applied to existing applications and to new or re-engineered applications throughout your process to help achieve greater security and lower remediation costs:
- Discover and create baselines: Conduct a complete inventory of applications and systems, including technical information (e.g. Internet Protocol [IP], Domain name system [DNS], OS used), plus business information (e.g., Who authorized the deployment? Who should be notified if the application fails?). Next, scan your Web infrastructure for common vulnerabilities and exploits. Check list servers and bug tracking sites for any known attacks on your OS, Web server and other third-party products. Prior to loading your application on a server, ensure that the server has been patched, hardened and scanned. Then, scan your application for vulnerabilities to known attacks, looking at HTTP requests and other opportunities for data manipulation. And, finally, test application authentication and user-rights management features and terminate unknown services.
- Assess and assign risks: Rate applications and systems for risk - focusing on data stores, access control, user provisioning and rights management. Prioritize application vulnerabilities discovered during assessments. Review organizational, industry and governmental policy compliance. And identify both acceptable and unacceptable operations.
- Shield your application and control damage: Stay on top of known security threats and apply available patches to your applications and/or infrastructure. If you cannot fix a security issue, use an application firewall, restrict access, disable the application or relocate it to minimize exposure.
- Continuously monitor and review: Schedule assessments as part of your documented change management process. When you close one out, immediately initiate a new discovery stage.
Thursday, March 6, 2008
What makes Web applications vulnerable?
In the Open Systems Interconnection (OSI) reference model, every message travels through seven network protocol layers. The application layer at the top includes HTTP and other protocols that transport messages with content, including HTML, XML, Simple Object Access Protocol (SOAP) and Web services.
Today, I will focus on application attacks carried over HTTP, an approach that traditional firewalls do not effectively combat. Many hackers know how to make HTTP requests look benign at the network level even though the data within them is potentially harmful. HTTP-carried attacks can allow unrestricted access to databases, execute arbitrary system commands and even alter Web site content.
Without governance measures to manage security testing throughout the application delivery lifecycle, software teams can expose applications to HTTP-carried attacks as a result of:
- Analysts and architects viewing security as a network or IT issue, so that only a few organization security experts are aware of application-level threats.
- Teams expressing application security requirements as vague expectations or negative statements (e.g. You will not allow unprotected entry points) that make test construction difficult.
- Testing application security late in the lifecycle - and only for hacking attempts.
Wednesday, March 5, 2008
This is a must-watch video from the FOX channel. It is about “hacking into photo sites” and “stealing” potentially embarrassing images to post up elsewhere, which is called "fuskering".
According to Mike Andrews, “Fuskering” basically is…
- It pulls a number of images/pages/etc. within a range, “expanding” the request based on a pre-identified “pattern”. E.g. “www.example.com/image[1-3].jpg” becomes www.example.com/image1.jpg, www.example.com/image2.jpg, www.example.com/image3.jpg
- It generally relies on someone “finding” an image first, then “fuskering” for others that might be from the same user, like using known sequence numbers from digital camera images (e.g. DSC12345.jpg - once you find one, other images are in that sequence, either ascending or descending in time) as a good starting point.
- There are a number of tools out there that do this automatically. I’m not going to link to them, but any Google-fu and you should be able to find them. Personally, Perl and a shell script would have done it for me.
- Sites are mostly “vulnerable” because they use the security by obscurity pattern - if an image link is “known” by a user (either because they have been sent it, or because they have permission to see the link and therefore the site displays it to them), then the image is viewable. If someone has the time/resources to perform random requests, or crawl for one “interesting” image and then fusker for others, it’s quite likely that other image links could be discovered and then requested.
- The lack of an authorization check on displaying the image itself (rather than on the display of the link) is often one of the security trade-offs that a site might decide to make - displaying links to only the “allowed” images as a page is being created isn’t much of a performance trade off - the site has to dynamically generate the page anyway somehow (on demand, or pre-processed). However, performing a secondary authorization check during the request for an actual image (whether it’s to the .jpg, .gif, etc, directly or via a “proxy” script) may be too much of a performance hit if lots of users are accessing the site pulling images.
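The range expansion described in the first bullet is trivial to implement, which is exactly the problem. A hedged sketch follows (the URLs are examples; the defensive takeaway, as above, is to authorize each image request rather than only the links):

```python
# Hedged sketch of fusker-style pattern expansion: turn one URL containing
# a [low-high] numeric range into the list of concrete URLs it implies.
import re

def expand_fusker(pattern):
    m = re.search(r"\[(\d+)-(\d+)\]", pattern)
    if not m:
        return [pattern]
    lo, hi = int(m.group(1)), int(m.group(2))
    width = len(m.group(1))  # preserve zero-padding like DSC0098
    return [pattern[:m.start()] + str(n).zfill(width) + pattern[m.end():]
            for n in range(lo, hi + 1)]

print(expand_fusker("www.example.com/image[1-3].jpg"))
# ['www.example.com/image1.jpg', 'www.example.com/image2.jpg',
#  'www.example.com/image3.jpg']
print(expand_fusker("DSC[0098-0101].jpg"))
```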