Monday, February 28, 2011

New Authentication Guidance

Draft Puts More Responsibility on Banks

A preliminary draft of new online authentication guidance from the Federal Financial Institutions Examination Council puts greater responsibility on the shoulders of financial institutions to enhance their security and prevent fraud.

The FFIEC has yet to formally unveil its long-awaited update to 2005's authentication guidance, but a December 2010 draft document entitled "Interagency Supplement to Authentication in an Internet Banking Environment" was reportedly distributed to the FFIEC's member agencies.

While it's likely that this draft will be amended before the final release of the new guidance, the current document calls for five key areas of improvement:

•Better risk assessments to help institutions understand and respond to emerging threats, including man-in-the-middle or man-in-the-browser attacks, as well as keyloggers;

•Widespread use of multifactor authentication, especially for so-called "high-risk" transactions;

•Layered security controls to detect and effectively respond to suspicious or anomalous activity;

•More effective authentication techniques, including improved device identification and protection, as well as stronger challenge questions;

•Heightened customer education initiatives, particularly for commercial accounts.

Risk Assessments

Risk assessments are addressed first in the draft, which levels some criticism at banking institutions for not being diligent about performing regular assessments.

The document says risk assessments should include regular reviews of internal systems, analyzing their abilities to:

•Detect and thwart established threats, such as malware;

•Respond to changes related to customer adoption of electronic banking;

•Respond to changes in functionality offered through e-banking;

•Analyze actual incidents of security breaches, identity theft or fraud experienced by the institution;

•Respond to changes in the internal and external threat environment.

Authentication for High-Risk Transactions

The FFIEC's definition of "high-risk transactions" remains unchanged. But the supplement does acknowledge that, since 2005, more consumers and businesses are conducting online transactions.

Layered Security

Layered security includes different controls at different points in a transaction process. If one control or point is compromised, another layer of controls is in place to thwart or detect fraud. Agencies say they expect security programs to include, at minimum:

•Processes designed to detect and effectively respond to suspicious or anomalous activity;

•Enhanced controls over users who are granted administrative privileges to set up other users or to change system configurations, such as user definitions, user privileges, and application configurations and limitations.

Effectiveness of Authentication Techniques

Part of the layered security approach, the draft suggests, should be stronger device identification, which could involve the use of "one-time" cookies to create a more complex digital fingerprint of the PC based on characteristics such as PC configuration, Internet protocol address and geo-location.

Although no device authentication method can mitigate all threats, the supplement says, "the Agencies consider complex device identification to be more secure and preferable to simple device identification."
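
To make the idea of complex device identification concrete, here is a minimal sketch of how a fingerprint might be derived from the kinds of attributes the draft mentions. It is an illustration only: the attribute names, the coarse IP prefix and the hashing choice are assumptions, not anything prescribed by the FFIEC, and a real implementation would also pair the fingerprint with a one-time cookie and handle attributes that change legitimately.

    import hashlib
    import json

    def device_fingerprint(attributes):
        """Hash a set of observed device attributes into a single identifier."""
        # Canonicalise the attribute set so the same inputs always hash the same way.
        canonical = json.dumps(attributes, sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    # Illustrative values only, for a hypothetical returning customer.
    observed = {
        "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:2.0)",
        "screen": "1280x800",
        "timezone": "UTC+10",
        "ip_prefix": "203.0.113.0/24",   # coarse network, not the full address
        "geo": "AU-NSW",
    }
    print(device_fingerprint(observed))

A returning login whose recomputed fingerprint no longer matches the stored one would then be routed through additional authentication layers rather than rejected outright.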

The need for stronger challenge questions is also noted, as yet another layer institutions can use to authenticate and identify a device and a user.

Customer Education and Awareness

As part of the effort to educate consumer and commercial customers about fraud risks and security measures, the draft states financial institutions should explain what protections are and are not provided under Regulation E. The drafted guidance also suggests banking institutions offer:

•An explanation of under what circumstances and through what means the institution may contact a customer and request the customer's electronic banking credentials;

•A suggestion that commercial online banking customers perform a related risk assessment and controls evaluation periodically;

•A listing of alternative risk control mechanisms that customers may consider implementing to mitigate their own risk;

•A listing of institutional contacts for customers' discretionary use in the event they notice suspicious account activity or experience customer information security-related events.

Stronger Fraud Detection

Beyond the supervisory expectations, the draft guidance includes an appendix that discusses the current threat landscape and compensating controls, including anti-malware software for customers, as well as transaction monitoring/anomaly detection software.

Similar Guidance in Australia?

Well - I am not sure whether we have something like the Federal Financial Institutions Examination Council (FFIEC), or a similar council, in Australia. Until we find the answer to that question, we should start using the guidance that is already available.

Friday, February 25, 2011

Botnets grow and attacks will evolve

Cybercrime Outlook 2020 From Kaspersky Lab

Websites hiding malware will evolve, as will botnets and the sophistication of attacks.

The depressing news forms part of Kaspersky's 2011-2020 cybercrime outlook report, which not only tells us what's happening now but predicts what we can expect in the year 2020.

According to the company’s analysts, the most significant trends of the last ten years (2001-2010) were:

•Mobility and miniaturization. Smaller and smaller devices can now access the Internet from virtually any point on the globe, making wireless networks the most popular method of connecting to the web.

•The transformation of virus writing into cybercrime.

•Windows maintaining its leading position as the dominant operating system for personal computers.

•Intense competition in the mobile platform market with no clear-cut leader.

•Social networks and search engines – the primary services of today’s Internet.

•Internet shopping – this sector already generates revenues that dwarf the annual budgets of some countries.

Back to now, it seems cyber criminals are moving away from sites that offer up illegal content such as pirated films and music, and onto sites that offer us services such as shopping and gaming. These attacks will often catch those who are not too au fait with technology, using a hidden piece of Java code that runs and redirects to malicious websites.

That's not all we have to worry about: the company also claims that within the next nine years we'll see some major changes that will affect the way we use PCs and the way hackers target us.

According to Kaspersky, attackers will have two ways of adapting: they can either make a weaker operating system their target, or specialise in Windows-based attacks on corporations.

This leads nicely into the next prediction that cybercrime by 2020 will be split into two groups.

The first will specialise in attacks on businesses, sometimes to order. They will include commercial espionage, database theft and corporate reputation-smearing attacks, all of which will be in demand on the black market.

Kaspersky predicts "hackers and corporate IT specialists will confront each other on the virtual battlefield."

The second group will target what influences our everyday lives, such as transport systems and other services as well as stealing personal data.

As we become more comfortable with technology and look at new ways to communicate without keyboards, spammers will have to work harder to send out those pesky emails. They'll do it though, with Kaspersky claiming the "volume of mobile spam will grow exponentially, while the cost of internet-based communications will shrink due to the intensive development of cellular communication systems."

Tuesday, February 22, 2011

Is it safe to talk about cybercrime?

Security incidents remain secret amongst CIOs, but is there good reason to be more vocal?

Few CIOs will discuss their security incidents in public — with good reason — but there are many compelling reasons for more openness.

You cannot ignore the risk of cyber-attack. No vulnerability will be left unturned by cybercriminals, bedroom hackers, pressure groups or anonymous citizens looking to disrupt an organisation’s business.

Attacks of all kinds are increasing in intensity and more firms are having to admit to being victims of them. This means that any company currently embarrassed by an attack or incident should take some solace from the fact that they are not alone.

Equally significant is the fact that the firms were apparently ready and willing to discuss what was happening, and what they were doing about it.

The most recent and perhaps most notorious attacks came as part of the fallout from the WikiLeaks scandal when each side of the privacy debate began a war of attrition that sought to prevent the other from having their place on the internet.

Firms including Amazon and Visa refused to support the WikiLeaks cause by hosting its documents or accepting donations for it respectively, and became enemies of a loose collective of hacktivists that lurk under the name 'Anonymous'. Anonymous, which had already shown its muscle in opposition to the Digital Economy Act, acted swiftly and effectively.

Refer here to read more details.

Monday, February 21, 2011

Dynamic Authentication - Visa Technology Innovation Program

New Technology Innovation Program is All About Secure Transactions

A move toward EMV can help merchants cut their security compliance costs

That's the message from Visa Inc., which last week announced the launch of the Visa Technology Innovation Program, designed to exempt eligible international merchants from annual validation of their compliance with the Payment Card Industry Data Security Standard.

The goal: to encourage merchants to move toward dynamic data authentication, which EMV chip technology makes possible.

In order to qualify for the Technology Innovation Program, international merchants in EMV markets must prove that at least 75 percent of their transactions are EMV chip transactions. They also must validate previous compliance with the PCI-DSS, and they cannot have a history of cardholder data breaches on their records. The program takes effect March 31.

Friday, February 18, 2011

Cybercrime Index: How were identities stolen today?

Online Dangers Alert

Anti-virus firm Symantec has launched a cybercrime tracking service in a bid to stop computer users from becoming complacent about online threats.

The free www.nortoncybercrimeindex.com service names the most threatening viruses of the day, where most identities were stolen the previous day, and whether hacker activity is higher or lower than average. The data is compiled via Symantec's Norton software and internet monitoring.

According to the service, hackers were today responsible for stealing 78 percent of online identities, and 21 percent were accidentally made public. About two-thirds of stolen identities were taken from the manufacturing industry, while 18 percent were nicked from communications or public relations businesses.

The threat of identity theft has doubled in the past two days, and "translator" is the most popular word being inserted on infected websites to lure in search engine users. 1314.QQ[dot]com remains the world's most dangerous website.

Norton 360 version 5.0 will provide quick access to the index data and reveal how many attacks are detected in which suburbs.

Anti-virus company AVG said cybercriminals were increasingly using phones to rob people. The company found more than one-third of smartphone owners were unaware of the increasing security risks associated with using mobiles to buy goods or store personal data.

Wednesday, February 16, 2011

Aussie banks expose credit card details

Australia's biggest banks are posting credit card numbers in clear view on mailed customer statements in a direct violation of credit card security regulations.

Placing numbers where any mail thief could grab them is a fundamental breach of the troubled Payment Card Industry Data Security Standard (PCI DSS), according to sources in the industry.

The industry standard, drafted by card issuers Visa, MasterCard and American Express and enforced by banks, is a series of security rules to which any business dealing with credit card transactions must adhere.

The standard is a collaborative industry effort to reduce financial fraud by mandating baseline security measures that essentially must accompany any credit card transaction. A call centre operator, for example, would be required to destroy a paper note if it was used to temporarily jot down a credit card number, while a website that stores transaction information must ensure it is adequately secure.

Non-compliant large businesses — or Tier 1 organisations bound by strict rules — face hundreds of thousands of dollars in fines, and risk losing their ability to process credit cards. The fines scale according to the number of credit card transactions processed.

But St George and the Commonwealth Bank have breached rule 101 of the standard by sending out potentially millions of paper statements to letterboxes that clearly detail credit card numbers in full.

Refer here for more details.

Tuesday, February 15, 2011

Hacking attacks from China hit energy companies worldwide

Global Energy Cyberattacks: “Night Dragon”

Security researchers at McAfee have sounded an alarm for what is described as “coordinated covert and targeted cyberattacks” against global oil, energy, and petrochemical companies.


McAfee said the attacks began in November 2009 and combined several techniques — social engineering, spear phishing and vulnerability exploits — to load custom RATs (remote administration tools) on hijacked machines.

The attacks, which McAfee tracked to China, allowed intruders to target and harvest sensitive, proprietary information about operations and project financing relating to oil and gas field bids and operations.

From McAfee's report: "We have identified the tools, techniques, and network activities used in these continuing attacks—which we have dubbed Night Dragon—as originating primarily in China. Through coordinated analysis of the related events and tools used, McAfee has determined identifying features to assist companies with detection and investigation. While we believe many actors have participated in these attacks, we have been able to identify one individual who has provided the crucial C&C infrastructure to the attackers."

The company released a white paper to outline the attacks, which included the use of SQL injection and password cracking techniques.

Refer here for more details.

Sunday, February 13, 2011

What have we learned from Conficker?

Conficker has been somewhat of a catalyst to help unify a large group of professional and academic whitehats

Conficker is the name applied to a sequence of malicious software. It initially exploited a flaw in Microsoft software, but has undergone significant evolution since then (versions A through E thus far).

Nearly from its inception, Conficker demonstrated just how effectively a random scanning worm can take advantage of the huge worldwide pool of poorly managed and unpatched internet-accessible computers. Even on those occasions when patches are diligently produced, widely publicized, and auto-disseminated by operating system and application manufacturers, Conficker demonstrates that millions of Internet-accessible machines may remain permanently vulnerable.

In some cases, even security-conscious environments may elect to forgo automated software patching, choosing to trade off vulnerability exposure for some perceived notion of platform stability.

Another lesson of Conficker is the ability of malware to manipulate the current facilities through which internet name space is governed. Dynamic domain generation algorithms (DGAs), along with fast flux (domain name lookups that translate to hundreds or thousands of potential IP addresses), are increasingly adopted by malware perpetrators as a response to the growing efficiency with which whitehats have been able to behead whole botnets by quickly identifying and removing their command and control sites and redirecting all bot client links.

While not an original concept, Conficker's DGA produced a new and unique struggle between Conficker's authors and the whitehat community, who fought for control of the daily sets of domains used as Conficker's internet rendezvous points.
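
To make the DGA idea concrete, the sketch below shows how a bot and its operator can independently compute the same daily list of candidate rendezvous domains from nothing more than the date. This is a deliberately simplified illustration, not Conficker's actual algorithm; the hash, domain length and TLD list are all invented for the example.

    import hashlib
    from datetime import date

    def daily_domains(day, count=10):
        """Generate a deterministic list of candidate rendezvous domains for one day.

        Both the bot and its controller run the same code and so agree on the
        day's domains, while defenders must predict, register or block every
        candidate to cut off command and control.
        """
        tlds = [".com", ".net", ".org", ".info", ".biz"]  # illustrative only
        domains = []
        for i in range(count):
            seed = f"{day.isoformat()}-{i}".encode("utf-8")
            digest = hashlib.md5(seed).hexdigest()
            domains.append(digest[:12] + tlds[i % len(tlds)])
        return domains

    print(daily_domains(date(2011, 2, 13)))

Conficker's later variants generated tens of thousands of candidate domains per day across many top-level domains, which is precisely what made pre-registering or blocking them such a struggle for the whitehat community.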

Yet another lesson from the study of Conficker is the ominous sophistication with which modern malware is able to terminate, disable, reconfigure, or blackhole native OS and third-party security services.

Today's malware truly poses a comprehensive challenge to our legacy host-based security products, including Microsoft's own anti-malware and host recovery technologies. Conficker offers a nice illustration of the degree to which security vendors are challenged to not just hunt for malicious logic, but to defend their own availability, integrity, and the network connectivity vital to providing them a continual flow of the latest malware threat intelligence.

To address this concern, we may eventually need new OS services specifically designed to help third-party security applications maintain their foothold within the host.

Friday, February 11, 2011

Fraud Incidents More Expensive - Javelin Strategy & Research

ID Fraud: New Accounts Most at Risk

The latest consumer fraud trends suggest that financial institutions must provide increasing leadership in the fight against identity-related fraud.


According to new findings from Javelin Strategy & Research, consumers and law enforcement alike now turn to banks and credit unions for more sophisticated detection and prevention when it comes to the misuse of stolen identities to open new accounts.

In its annual Identity Fraud Survey report, Javelin finds that losses from new account fraud far exceed those associated with other types of ID fraud. Moreover, new account fraud is harder to detect.

Javelin finds that the number of ID fraud incidents dropped 28 percent in 2010 from 2009, when ID fraud reached an all-time high, but the expense associated with recovering from ID fraud increased 66 percent.

Please refer here to download the report.

Wednesday, February 9, 2011

Monitoring of Power Grid Cyber Security

Efforts to Secure Nation’s Power Grid Ineffective

The official government cybersecurity standards for the electric power grid fall far short of even the most basic security standards observed by noncritical industries, according to a new audit.


The standards have also been implemented spottily and in illogical ways, concludes a Jan. 26 report from the Department of Energy's inspector general. And even if the standards had been implemented properly, they "were not adequate to ensure that systems-related risks to the nation's power grid were mitigated or addressed in a timely manner."

At issue is how well the Federal Energy Regulatory Commission, or FERC, has performed in developing standards for securing the power grid, and ensuring that the industry complies with those standards. Congress gave FERC jurisdiction in 2005 over the security of producers of bulk electricity — that is, the approximately 1,600 entities across the country that operate at 100 kilovolts or higher. In 2006, FERC then assigned the North American Electric Reliability Corporation (NERC), an industry group, the job of developing the standards.

The result, according to the report, is deeply flawed.

The standards, for example, fail to call for secure access controls — such as requiring strong administrative passwords that are changed frequently, or placing limits on the number of unsuccessful login attempts before an account is locked. The latter is a security issue that even Twitter was compelled to address after a hacker gained administrative access to its system using a password cracker.
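
As a concrete illustration of the second control, a lockout rule can be as simple as counting consecutive failed logins per account and refusing further attempts once a threshold is reached. The sketch below is a minimal example; the threshold and lockout window are illustrative values, not figures mandated by the CIP standards.

    import time

    MAX_FAILURES = 5            # illustrative threshold
    LOCKOUT_SECONDS = 15 * 60   # illustrative lockout window

    _failures = {}  # account -> (consecutive failures, time of last failure)

    def record_failed_login(account):
        count, _ = _failures.get(account, (0, 0.0))
        _failures[account] = (count + 1, time.time())

    def record_successful_login(account):
        # A successful login clears the failure counter.
        _failures.pop(account, None)

    def is_locked(account):
        count, last = _failures.get(account, (0, 0.0))
        if count < MAX_FAILURES:
            return False
        # Unlock automatically once the lockout window has expired.
        return (time.time() - last) < LOCKOUT_SECONDS

Without such a limit, a brute-force password cracker can simply keep guessing, which is exactly the weakness exploited in the Twitter incident mentioned above.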

The report is particularly timely in light of the discovery last year of the Stuxnet worm, a sophisticated piece of malware that was the first to specifically target an industrial control system — the kind of system that is used by nuclear and electrical power plants.

The security standards, formally known as the Critical Infrastructure Protection, or CIP, cybersecurity reliability standards, were in development for more than three years before they were approved in January 2008. Entities performing the most essential bulk electric-system functions were required to comply with 13 of the CIP requirements by June 2008, with the remaining requirements phased in through 2009.

The report indicates that this time frame was out of whack, since many of the most critical issues were allowed to go unaddressed until 2009. For example, power producers were required to begin reporting cybersecurity incidents and create a recovery plan before they were required to actually take steps to prevent the cyber intrusions in the first place — such as implementing strong access controls and patching software vulnerabilities in a timely manner.

The standards are also much less stringent than FERC's own internal security policy. The standards indicate passwords should be a minimum of six characters and changed at least every year, but FERC's own internal security policy requires passwords to be at least 12 characters long and changed every 60 days.

One of the main problems with the standards seems to be that they fail to define what constitutes a critical asset and therefore permit energy producers to use their discretion in determining whether they even have any critical assets. Any entity that determines it has no critical assets can consider itself exempt from many of the standards. Since companies are generally loath to invest in security practices unless they absolutely have to — due to costs — it's no surprise that the report found many of them underreporting their lists of critical assets.

“For example, even though critical assets could include such things as control centers, transmission substations and generation resources, the former NERC Chief Security Officer noted in April 2009 that only 29 percent of generation owners and operators, and less than 63 percent of transmission owners, identified at least one critical asset on a self-certification compliance survey,” the report notes.

Refer here to download the report.

Saturday, February 5, 2011

What is network Scanning?

Examine your Network With Nmap

Network scanning is an important part of network security that any system administrator must be comfortable with. Network scanning usually consists of a port scanner and a vulnerability scanner.

A port scanner is software designed to probe a server or host for open ports. It is often used by administrators to verify the security policies of their networks, and it can be used by an attacker to identify services running on a host with a view to compromising it. A port scan sends client requests to a range of port addresses on a host in order to find an active port. The design and operation of the Internet is based on TCP/IP. A port can behave in one of the following ways:
  • Open or Accepted: The host sent a reply indicating that a service is listening on the port.
  • Closed or Denied or Not Listening: The host sent a reply indicating that connections will be denied to the port.
  • Filtered, Dropped or Blocked: There was no reply from the host.
Port scanning comes in several types, such as TCP scanning, SYN scanning, UDP scanning, ACK scanning, window scanning, FIN scanning, Xmas scanning, protocol scanning, proxy scanning, idle scanning, CatSCAN and ICMP scanning.
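
As a minimal illustration of the open/closed/filtered behaviour described above, the sketch below performs the crudest form of TCP scanning: a plain connect() against a handful of ports. The target address is a placeholder, this is an educational sketch rather than a replacement for a real scanner, and you should only scan hosts you are authorised to test.

    import socket

    def connect_scan(host, ports, timeout=1.0):
        """Very basic TCP connect scan.

        A successful connect() means the port is open, a refused connection
        means it is closed, and a timeout usually means the probe was
        filtered or dropped by a firewall.
        """
        results = {}
        for port in ports:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(timeout)
            try:
                sock.connect((host, port))
                results[port] = "open"
            except socket.timeout:
                results[port] = "filtered (no reply)"
            except ConnectionRefusedError:
                results[port] = "closed"
            except OSError as exc:
                results[port] = "error: {}".format(exc)
            finally:
                sock.close()
        return results

    # 192.0.2.10 is a documentation placeholder address.
    print(connect_scan("192.0.2.10", [22, 80, 443]))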

TCP scanning

The simplest port scanners use the operating system's network functions to open a full connection, and this is generally the next option to fall back on when a SYN scan is not feasible.

SYN scanning

SYN scanning is another form of TCP scanning. Rather than using the operating system's network functions, the port scanner generates raw IP packets itself and monitors for responses. This scan type is also known as half-open scanning, because it never actually opens a full TCP connection.
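
Because the scanner has to craft the raw packet itself, a half-open probe is usually written with a packet-crafting library. The sketch below assumes the third-party Scapy library (not part of the standard library) and root privileges; a SYN/ACK reply marks the port as open, an RST marks it as closed, and the scanner tears the exchange down with its own RST instead of completing the handshake.

    # Requires the third-party 'scapy' package and root privileges.
    from scapy.all import IP, TCP, sr1, send

    def syn_probe(host, port, timeout=2):
        """Send a single SYN and classify the reply without completing the handshake."""
        reply = sr1(IP(dst=host) / TCP(dport=port, flags="S"), timeout=timeout, verbose=0)
        if reply is None:
            return "filtered (no reply)"
        if reply.haslayer(TCP):
            tcp_flags = int(reply[TCP].flags)
            if (tcp_flags & 0x12) == 0x12:   # SYN/ACK -> open
                # Send our own RST so the target does not hold a half-open socket.
                send(IP(dst=host) / TCP(dport=port, flags="R"), verbose=0)
                return "open"
            if tcp_flags & 0x04:             # RST -> closed
                return "closed"
        return "filtered"

    print(syn_probe("192.0.2.10", 80))  # placeholder address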

UDP scanning

UDP is a connectionless protocol, so there is no equivalent to a TCP SYN packet. If a UDP packet is sent to a port that is not open, the system will respond with an ICMP port unreachable message. If a port is blocked by a firewall, this method will falsely report that the port is open, and if the port unreachable message itself is blocked, all ports will appear open.
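
The behaviour just described maps directly onto a probe like the one below, again assuming the third-party Scapy library and root privileges. An ICMP "port unreachable" reply marks the port as closed; silence has to be reported as open|filtered, because an open port and a firewall that silently drops the probe look identical.

    # Requires the third-party 'scapy' package and root privileges.
    from scapy.all import IP, UDP, ICMP, sr1

    def udp_probe(host, port, timeout=2):
        """Send an empty UDP datagram and interpret the (lack of) response."""
        reply = sr1(IP(dst=host) / UDP(dport=port), timeout=timeout, verbose=0)
        if reply is None:
            # No answer: the port may be open, or a firewall dropped the probe.
            return "open|filtered"
        if reply.haslayer(ICMP) and reply[ICMP].type == 3 and reply[ICMP].code == 3:
            return "closed"   # ICMP port unreachable
        return "open"         # some actual UDP payload came back

    print(udp_probe("192.0.2.10", 53))  # placeholder address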

ACK scanning

This kind of scan does not exactly determine whether the port is open or closed, but whether the port is filtered or unfiltered. This kind of scan can be good when attempting to probe for the existence of a firewall and its rule sets.

FIN scanning

Firewalls usually block unsolicited SYN packets, but FIN packets are often able to pass through unmodified. Closed ports reply to a FIN packet with an RST packet, whereas open ports simply ignore it.
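
The ACK and FIN probes differ from the earlier SYN sketch only in the flags set on the outgoing segment and in how the reply is read: for an ACK probe, any RST coming back means "unfiltered" and silence means "filtered", while for a FIN probe an RST means "closed" and silence means "open or filtered". A condensed sketch, again assuming the third-party Scapy library and root privileges:

    # Requires the third-party 'scapy' package and root privileges.
    from scapy.all import IP, TCP, sr1

    def flag_probe(host, port, flags, timeout=2):
        """Send a TCP segment with the given flags: 'A' for an ACK scan, 'F' for a FIN scan."""
        reply = sr1(IP(dst=host) / TCP(dport=port, flags=flags), timeout=timeout, verbose=0)
        if flags == "A":
            # Any RST back means the port is reachable (unfiltered); silence means filtered.
            return "unfiltered" if reply is not None and reply.haslayer(TCP) else "filtered"
        if flags == "F":
            if reply is None:
                return "open|filtered"   # open ports silently drop a stray FIN
            if reply.haslayer(TCP) and int(reply[TCP].flags) & 0x04:
                return "closed"          # RST reply
            return "filtered"
        raise ValueError("this sketch only handles 'A' and 'F' probes")

    print(flag_probe("192.0.2.10", 80, "A"))  # placeholder address
    print(flag_probe("192.0.2.10", 80, "F"))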

Nmap supports a large number of these scan types. A vulnerability scanner, by contrast, is a program designed to assess computers, computer systems, networks or applications for weaknesses. It is important that the network administrator is familiar with both kinds of tool.

There are many types of software for scanning networks; some are free and some are not, and at Sectools you can find a list of them. The significant point about Nmap (Network Mapper) is that it is free and open source. Nmap is a security scanner originally written by Gordon Lyon (also known by his pseudonym Fyodor Vaskovich) to discover hosts and services on a computer network. Nmap runs on Linux, Microsoft Windows, Solaris, HP-UX and BSD variants (including Mac OS X), and also on AmigaOS and SGI IRIX.

Nmap includes the following features:
  • Host Discovery
  • Port Scanning
  • Version Detection
  • OS Detection
  • Scriptable interaction with the target
Nmap works in two modes: command-line mode and GUI mode. The graphical version of Nmap is known as Zenmap. The official GUI for Nmap versions 2.2 to 4.22 was NmapFE, originally written by Zach Smith. For Nmap 4.50, NmapFE was replaced with Zenmap, a new graphical user interface based on UMIT and developed by Adriano Monteiro Marques. Zenmap is easy to work with and provides a pleasant working environment.
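
To show how the features listed above come together in practice, the sketch below drives Nmap from Python via the third-party python-nmap wrapper (an assumption: the wrapper must be installed separately and the nmap binary must be on the PATH). The same scan could be run directly from the command line as "nmap -sV -p 22-443 192.0.2.10"; the address is a placeholder.

    # Requires the nmap binary and the third-party 'python-nmap' wrapper.
    import nmap

    scanner = nmap.PortScanner()
    # '-sV' asks for service/version detection; add '-sS' for a SYN scan (needs root).
    scanner.scan(hosts="192.0.2.10", ports="22-443", arguments="-sV")

    for host in scanner.all_hosts():
        print(host, scanner[host].state())
        for proto in scanner[host].all_protocols():
            for port in sorted(scanner[host][proto].keys()):
                info = scanner[host][proto][port]
                print("  {}/{}: {} {}".format(proto, port, info["state"], info.get("name", "")))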

Friday, February 4, 2011

Cyber security has become Australia's "fundamental weakness"

Australia's cyber security 'weak' - report

AUSTRALIA is increasingly ill-equipped to deal with cyber attacks on the country's energy, water, transport and communications systems, a report states.

The study by security think-tank Kokoda Foundation, to be released today, argues cyber security has become Australia's "fundamental weakness".

"A broader understanding of the nature, scale and extent of online threats to private information is crucial to the ongoing security of this country," report co-author John Blackburn said. Mr Blackburn is a former deputy chief of the air force.

The report recommends national security adviser Duncan Lewis be given lead responsibility for coordinating cyber security "across government".

It also suggests a federal minister be given specific responsibility for tackling online threats and a 10-year plan be developed to manage cyberspace.

The full report will be released in Canberra later today.

Refer here to read the news.

Wednesday, February 2, 2011

Some Lessons to be Learned from Stuxnet

STUXNET creators not so ELITE?

Everyone knows what Stuxnet is, and if you don't, you probably missed the most discussed and much-praised worm of the past few years.

The worm, which targets Siemens systems controlling critical power infrastructure, has been the subject of deep analysis by researchers seeking to uncover who is behind it and who the final target was. Both questions have readily found an answer: at least according to the authoritative Times, it was a joint effort between the US and Israeli governments to destroy alleged Iranian projects to build a nuclear arsenal.

Although the goal has not been reached, Iran's path to a nuclear bomb has been set back by two years, as President Obama, while skirting the Stuxnet issue, stated in an interview regarding Iran. The much hyped Stuxnet, dubbed the most sophisticated worm ever, has also been the subject of analysis by Tom Parker, a security researcher who presented his own analysis and view of the Stuxnet case at BlackHat DC.

For the first time, someone is stating that the Stuxnet worm is not as elite as everybody thought at the beginning, and that the media probably played an important role in the hype. Still according to Parker, too many mistakes were made and too many logic flaws let things go wrong. Parker seconded the hypothesis that the code was produced by two separate groups: one building the core of it and another, much less experienced, providing the exploits and the command and control code.

Another security expert, Nate Lawson, considers the Stuxnet code no more elite than any other malware around, noting that it does not even implement advanced obfuscation techniques such as anti-debugging routines.

Some more interesting links with further details and lessons-learnt analysis:
Strategic Lessons of Stuxnet
ICS-CERT Stuxnet Lessons Learnt
Stuxnet Lesson Learned: The Twain Always Meet

Tuesday, February 1, 2011

Stuxnet - Interesting white paper from Tier-3

Stuxnet: Doomsday Bug or Media Hype?

Stuxnet was the hot security story of 2010 and, according to some experts, the turning point in IT security. But was it Y2K-style hype, doomsday for SCADA systems and ICSs, or a warning for everyone?

This fully-referenced Short White Paper examines:
  • The actual threat that Stuxnet poses.
  • The parallels between Stuxnet in ICSs and e-espionage in all IT networks.
  • How to protect your enterprise against these emerging threats.
I hope you find it useful. To read the paper, please use this link: Stuxnet: Doomsday Bug or Media Hype?