Industry Research: Imperva Reveals New Search Engine Hacking Technique
Imperva’s Hacker Intelligence Initiative (HII) today revealed that hackers are leveraging the power of search engines to carry out attacks successfully – and virtually risk-free. Hackers, armed with a browser and specially crafted search queries ("Dorks"), are using botnets to generate more than 80,000 daily queries, identify potential attack targets and build an accurate picture of the potentially exposed resources on those servers. Automating the queries and result parsing lets an attacker issue a large number of queries, examine all the returned results and obtain a filtered list of potentially exploitable sites in a very short time and with minimal effort. Because the searches are conducted from the botnet’s machines rather than the hacker’s own IP address, the attacker’s identity remains concealed.
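To make the "dork" workflow concrete, here is a minimal sketch of automated result filtering. The queries and URL patterns below are invented for illustration; real dork databases contain thousands of such entries.

```python
import re

# Hypothetical illustration: a "dork" pairs a crafted search query with a
# URL pattern that narrows the returned results to potentially exploitable
# pages. Both queries and patterns here are invented examples.
DORKS = {
    'inurl:"index.php?id="': re.compile(r"index\.php\?id=\d+"),
    'filetype:sql "INSERT INTO"': re.compile(r"\.sql$"),
}

def filter_results(dork_query, result_urls):
    """Keep only result URLs that match the dork's target pattern."""
    pattern = DORKS[dork_query]
    return [url for url in result_urls if pattern.search(url)]
```

Running `filter_results` over each page of search results is what turns a broad query into the "filtered list of potentially exploitable sites" described above.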
"Hackers have become experts at using Google to create a map of hackable targets on the Web. This cyber reconnaissance allows hackers to be more productive when it comes to targeting attacks, which may lead to contaminated web sites, data theft, data modification, or even a compromise of company servers," explained Imperva's CTO, Amichai Shulman. "These attacks highlight that search engine providers need to do more to prevent attackers from taking advantage of their platforms."
Botnet-based search engine mining:
To block automated search campaigns, today’s search engines deploy detection mechanisms based on the IP address of the originating request. Imperva’s investigation shows that hackers easily overcome these mechanisms by distributing their queries across different compromised machines, i.e. botnets.
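The evasion technique is simple to sketch: spread the query load across many bot IPs so that no single address trips a per-IP rate limit. The sketch below is an invented illustration of that idea, not a real attack tool.

```python
from itertools import cycle

# Illustrative sketch (names invented): round-robin assignment of search
# queries across compromised hosts, so each IP stays under per-IP limits.
def distribute_queries(queries, bot_ips):
    """Assign queries round-robin across the given bot IPs."""
    assignment = {ip: [] for ip in bot_ips}
    for ip, query in zip(cycle(bot_ips), queries):
        assignment[ip].append(query)
    return assignment
```

With a few dozen hosts, even a campaign of hundreds of thousands of queries keeps each individual IP's volume low enough to look unremarkable.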
During May and June, Imperva's Application Defense Center (ADC) observed a specific botnet attack on a popular search engine. For each unique search query, the botnet examined dozens or even hundreds of returned results by using paging parameters in the query.
The volume of attack traffic was huge: nearly 550,000 queries (up to 81,000 per day, and 22,000 per day on average) were issued during the observation period. The attacker took advantage of the bandwidth available to the dozens of controlled hosts in the botnet to seek out and examine vulnerable applications.
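Paging through results is what multiplies one dork into many requests. The sketch below shows the idea; the host name and the `q`/`start` parameter names are assumptions, as real engines use their own URL schemes.

```python
from urllib.parse import urlencode

# Sketch of result paging (hypothetical engine and parameter names):
# a single dork query fans out into one request per result page.
def paged_search_urls(query, pages, per_page=10):
    """Build one search URL per result page for the given query."""
    base = "https://search.example.com/search?"
    return [base + urlencode({"q": query, "start": page * per_page})
            for page in range(pages)]
```

Examining, say, 20 pages per dork across hundreds of dorks quickly accounts for query volumes on the scale the ADC observed.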
The Hacker’s 4 Steps for an Industrialised Attack:
1. Get a botnet. This is usually done by renting a botnet from a bot farmer who controls a global network of compromised computers.
2. Obtain a tool for coordinated, distributed searching. This tool is deployed to the botnet agents and it usually contains a database of dorks.
3. Launch a massive search campaign through the botnet. Our observations show that there is an automated infrastructure to control the distribution of dorks and the examination of results across the botnet's members.
4. Craft a massive attack campaign based on search results. With the list of potentially vulnerable resources, the attacker can create a script, or use a ready-made one, to craft targeted attack vectors that attempt to exploit vulnerabilities in pages retrieved by the search campaign. Attacks include: infecting web applications, compromising corporate data or stealing sensitive personal information.
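The four steps above can be sketched as one pipeline. This is a defender's-eye illustration, not attacker code: `search` and `build_vector` are injected stand-ins for the distributed search tool (steps 2-3) and the exploit-crafting script (step 4).

```python
# Hypothetical sketch of the four-step industrialised attack pipeline.
# `search(dork, ip)` and `build_vector(target)` are stubs standing in
# for the attacker's tooling; nothing here is a real implementation.
def run_campaign(bot_ips, dorks, search, build_vector):
    targets = []
    for i, dork in enumerate(dorks):
        ip = bot_ips[i % len(bot_ips)]         # steps 1-2: dork sent via a bot
        targets.extend(search(dork, ip))       # step 3: distributed search
    return [build_vector(t) for t in targets]  # step 4: targeted attack vectors
```

The point of the sketch is how little the attacker does by hand: once the botnet and dork list exist, the whole campaign is a loop.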
Recommendations for Search Engines:
Search engine providers should start looking for unusual or suspicious queries – such as those known to be part of public dork databases, or queries that look for known sensitive files (e.g. /etc files or database data files).
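A toy version of that detection is a simple classifier over incoming queries. The dork entries and sensitive-file tokens below are invented examples; a real deployment would draw on the public dork databases the report mentions.

```python
# Toy query classifier (all patterns invented for illustration): flag
# queries that appear in known dork databases or probe for sensitive files.
KNOWN_DORKS = {'inurl:"index.php?id="', 'intitle:"index of" passwd'}
SENSITIVE_TOKENS = ("/etc/passwd", ".sql", ".mdf", "wp-config")

def is_suspicious(query):
    """Return True if the query matches a known dork or sensitive-file probe."""
    if query in KNOWN_DORKS:
        return True
    return any(token in query for token in SENSITIVE_TOKENS)
```

Flagged queries then feed the traffic analysis described next, from which botnet IPs and query patterns can be extracted.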
A list of IPs suspected of being part of a botnet and a pattern of queries from the botnet can be extracted from the suspicious traffic that is flagged by the analysis. Using these black-lists, search engines can then:
Apply strict anti-automation policies (e.g. CAPTCHA) to blacklisted IP addresses. Google has been known to use CAPTCHAs in recent years when a client host exhibits suspicious behaviour. However, it appears that this is motivated at least partly by a desire to fight Search Engine Optimisation abuse and to preserve the engine’s computational resources, and less by security concerns. Smaller search engines rarely resort to defences more sophisticated than applying timeouts between queries from the same IP, which automated botnets easily circumvent.
Identify additional hosts which exhibit the same suspicious behaviour pattern to update the IPs blacklist.
Search engines can use the IPs black list to issue warnings to the registered owners of the IPs that their machines may have been compromised by attackers. Such a proactive approach could help make the Internet safer, instead of just settling for limiting the damage caused by compromised hosts.
Recommendations for Organisations:
Organisations need to be aware that, given how efficiently and thoroughly search engines index corporate information – including Web applications – the exposure of vulnerable applications is bound to occur. While attackers are mapping out these targets, it is essential that organisations prepare against exploits tailored to these vulnerabilities. This can be done by deploying runtime application-layer security controls:
A Web Application Firewall should detect and block attempts at exploiting application vulnerabilities.
Reputation-based controls could block attacks originating from known malicious sources.
As Imperva’s 2011 H1 Web Application Attack Report (WAAR) showed, attacks are automated; a request generated by an automated process, particularly one coming from a known active botnet source, should be flagged as malicious.
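A minimal sketch of those two runtime controls, combining a reputation list with a crude automation heuristic. The decision labels and the user-agent check are assumptions for illustration, not a real WAF rule set.

```python
# Toy request triage combining the report's two recommended controls
# (all heuristics and names are illustrative assumptions):
#   1. reputation-based blocking of known botnet sources
#   2. flagging requests that show obvious automation signals
def waf_decision(src_ip, user_agent, botnet_ips):
    if src_ip in botnet_ips:
        return "block"    # reputation-based control
    if not user_agent or "bot" in user_agent.lower():
        return "flag"     # crude automated-client heuristic
    return "allow"
```

In practice a WAF layers many such signals, but even this simple combination stops requests from the botnet sources that the report shows drive these campaigns.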
Published: Monday, August 8, 2011