REDMOND, Wash.–Cybercrime has grown in the last few years into a major concern, not just for the consumers and businesses that are its victims, but also for governments around the world. Obama administration officials have called it one of the biggest threats to the United States economy. While law enforcement agencies handle investigation and prosecution, they are increasingly aided by experts at companies such as Microsoft and Google that have unique insight into attackers’ activities and the capability to make life more difficult for them.
Microsoft, for one, has taken a very aggressive stance on cybercrime in recent years. By virtue of its massive user base, the company has a lot of visibility into the ways that attackers are exploiting not just Microsoft products, but also other applications installed on target machines. As one would imagine, Microsoft officials take a dim view of attackers going after their customers, and the company has been using a variety of methods to prevent cybercrime and punish those involved in it. The most visible piece of this arsenal is the Microsoft Digital Crimes Unit, a small group of engineers, security experts and lawyers here who spend their days tracking botnet operators and malware writers, and helping law enforcement agencies around the world identify and find them.
It may seem odd that a software vendor, even one as large and influential as Microsoft, would fund such a team, but company officials say it’s an important part of keeping customers safe. If cybercrime can’t be prevented, the DCU members want to be sure that it is less attractive and less profitable for the attackers who choose to get involved.
“The bad guys are getting better at what they do, and we want to be a force-multiplier for good. Our job is not law enforcement. Our goal is to transform this fight to really disrupt and destroy the way cybercriminals operate,” said T.J. Campana, director of security at the DCU.
The biggest target thus far for the DCU team has been the botnet problem. Botnets are used for a variety of nasty purposes, especially spam, DDoS attacks and data theft. Microsoft and other vendors have been tackling the problem from various angles for years now, but the tool that they’ve found to be the most effective involves a combination of legal and technical means of crippling a botnet. The company, along with law enforcement agencies and other vendors, has succeeded in taking down several botnets in the last few years, including Kelihos, Zeus, Waledac and Rustock. In many of these cases, along with sinkholing the target botnet’s command-and-control servers, the Microsoft DCU team has used court orders to physically seize servers. This tactic has been somewhat controversial, but Campana said the nature of the threats has made it necessary.
“Botnets are the backbone of the modern cybercriminal,” he said. “We’re severing the connection between the harmed customer and the bad guys. We’ve had court orders to go in and rip the servers out of the data centers. It doesn’t get any cooler than that. But it’s an extraordinarily high burden of proof to be able to do that.”
Disrupting botnets can be a frustrating business, as Microsoft has found: attackers often react to a takedown by simply moving to new infrastructure, finding pliable hosting providers and getting back to business. That’s always going to be a possibility, especially when attackers are able to buy bot toolkits cheaply and quickly build up a new network of compromised machines.
“The cost of entry into cybercrime is very low and the profits are high. We want to increase the cost of the bad guys getting into the cybercrime business,” he said. “And if they do get in, we want to decrease their ability to make money. We want to demotivate this kind of activity.”
But Campana said the takedowns are only one piece of the larger picture. The company is now building a new cybercrime center at its headquarters here, and DCU officials hope to make it a nerve center for anti-cybercrime operations across the industry. As part of an effort to speed up the pace at which it can respond to emerging attacks on its customers, Microsoft’s DCU also is working on a new Cyber Threat Intelligence service, which Campana said could serve as a two-way communication channel to help get information and remediation tools out to cybercrime victims much more quickly.
“I want to get to a place where the bad guy launches a new attack, and within a couple of minutes we can respond and get a message to victims,” he said. “I want that identification, notification and remediation happening as quickly as possible.”
Campana’s team also is working with a number of outside companies and groups to help make it more difficult for attackers to get access to the tools they need for their operations. One way they’re doing this is by working with hosting providers, which are key cogs in many cybercrime machines, especially botnets. Attackers often use so-called bulletproof hosting providers to house their C2 servers for botnets, malware distribution and phishing campaigns. But they also will take advantage of legitimate hosting providers who aren’t aware of what’s going on. Campana said his team is working with many hosting companies to fix this. They’re also talking with domain registrars to prevent attackers from being able to register dozens or hundreds of domains quickly for use in fast-flux botnets.
“A lot of the domains they register are just randomly generated numbers and letters. We’re talking with the hosting providers and registrars to say, let’s just not let these kind of domains be registered ever,” he said.
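One common way registrars and hosting providers screen for the kind of random-looking names Campana describes is to score a candidate label's character-level randomness: a long label with near-uniform character frequencies looks more like DGA output than a brand name. A minimal sketch of that idea (the threshold and the minimum-length cutoff are illustrative assumptions, not anything Microsoft or any registrar has published):

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits per character of a domain label's character distribution."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, entropy_threshold: float = 3.4) -> bool:
    """Crude screen: long labels with near-uniform character
    distributions resemble DGA output more than real brand names.
    Threshold and length cutoff are invented for illustration."""
    label = domain.split(".")[0].lower()
    if len(label) < 8:
        return False  # short labels are too noisy to score reliably
    return shannon_entropy(label) >= entropy_threshold
```

A real screen would combine such a score with other signals (bulk-registration velocity, registrant reputation), since entropy alone misfires on legitimate hash-like subdomains.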
While the DCU has seen plenty of success so far, Campana said there’s no shortage of challenges looming for his team and others interested in disrupting cybercrime.
“The bad guys are moving at such a fast pace and they’re changing their tactics on the fly,” he said. “They don’t play by the rules. They don’t have any rules, and we have to find a way to make it harder for them.”
Many popular online services have started to deploy password strength meters, visual gauges that are often color-coded and indicate whether the password you’ve chosen is weak or strong based on the website’s policy. The effectiveness of these meters in influencing users to choose stronger passwords had not been measured until recently.
A paper released this week by researchers at the University of California, Berkeley, the University of British Columbia and Microsoft provides details on the results of a couple of experiments examining how these meters influence computer users when they’re creating passwords for sensitive accounts and for unimportant accounts.
The long and the short of it: It depends.
Users, despite a barrage of news about stolen credentials, identity theft and data breaches, will re-use passwords over and over, especially at account creation, regardless of the presence of a meter. If the context changes, however, and users are asked to change existing passwords on sensitive accounts, the presence of a meter does make some difference.
“I didn’t expect them to have any effect,” said Serge Egelman, a UC Berkeley researcher, in an interview with Threatpost. Egelman, along with University of British Columbia colleagues Andreas Sotirakopoulos, Ildar Muslukhov, and Konstantin Beznosov, and Cormac Herley of Microsoft, began their experiment as a means of testing a new type of meter they developed that measures password strength relative to other users. What they learned instead is that peer pressure isn’t as effective as the context in which the meter is shown.
The experiment was two-fold, first in a lab and then in the field. In both instances, none of the participants knew they were taking part in a password study. There was also a control condition for both studies where a meter was not presented. For sensitive accounts where users see a meter, Egelman said, the users deployed strong passwords. In the field experiment conducted against “unimportant accounts,” the meter made no difference and most of the time users re-used old passwords.
“We conclude that meters result in stronger passwords when users are forced to change existing passwords on important accounts and that individual meter design decisions likely have a marginal impact,” the team wrote.
Password re-use has some obvious risks, the worst being that if a hacker compromises one password on an unimportant account, for example, they could use that password on more sensitive accounts protected by the same secret code.
“We don’t have anything better [than passwords],” Egelman said. “That’s what it comes down to. All of the problems we generally see with passwords are a result of poor policies and stem from the frequency with which we see databases getting disclosed. If more work was done to secure stored encrypted passwords, less effort would need to be done on the users’ end.”
With 75 percent of the Alexa top 20 websites using some sort of meter, Egelman said, there is an expectation that users will choose stronger passwords if a meter is present. The team’s experiments demonstrated noticeable changes in password strength with the presence of a meter if the user was prompted to change their password, for example because of a policy mandate that passwords be changed periodically. The test results show that the presence of either a weak-to-strong meter or a meter comparing passwords against those of other users did nudge users toward stronger passwords, while those without a meter continued to re-use old or weak passwords. Users with meters also chose longer passwords and used more symbols and lower-case letters.
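The weak-to-strong meters in the study score a candidate password against a site's policy and map the score to a colored band. A toy scorer in that spirit, rewarding exactly the behaviors the study observed (longer passwords, more character classes) — the weights and band cutoffs here are invented for illustration and are not any site's real policy:

```python
import string

def score_password(pw: str) -> int:
    """Toy 0-100 score: length plus character-class diversity.
    Weights are illustrative, not taken from any real meter."""
    classes = [
        any(c in string.ascii_lowercase for c in pw),
        any(c in string.ascii_uppercase for c in pw),
        any(c in string.digits for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    length_points = min(len(pw), 16) * 4   # up to 64 points
    class_points = sum(classes) * 9        # up to 36 points
    return length_points + class_points

def band(score: int) -> str:
    """Map a numeric score to the color band a meter would display."""
    if score < 40:
        return "weak"
    if score < 70:
        return "medium"
    return "strong"
```

Real meters vary widely in their heuristics, which is part of what the researchers set out to measure; their conclusion was that individual design decisions like these weights likely matter less than the context in which the meter appears.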
The 47 participants were users affiliated with the University of British Columbia who used the school’s single sign-on system for access to student accounts and a campus portal. They were not informed they were taking part in a password study; instead, they were told they were testing the usability of the portal. Once they logged in, a notice popped up saying that their passwords had expired per policy and that they were required to change them.
The field experiment, meanwhile, was conducted against less important accounts for 541 participants, many of whom re-used weak, existing passwords. In an exit survey, only 13 percent remembered seeing strength meters and others said the meters would have labeled their passwords as weak.
“We found that reused passwords were not observably weaker than the passwords of those who claimed not to have reused passwords. Thus, the extent to which password reuse impacts strength remains unclear,” the team wrote in its paper. “We believe that effects stemming from participants’ perceptions about the unimportance of the website outweighed any effects relating to the meters or their choice to reuse existing passwords; when passwords were reused, weaker existing passwords were employed.”
The team concluded that the presence of meters upon site registration, for example, is not as effective as when the meters are not associated with a registration, and that participants are likely to choose weak, easy-to-remember passwords they’ve used before if not prompted to check their strength.
“We’re not going away from passwords any time soon. I would like to see more focus on acceptable password policies in terms of balancing the burdens on users with site security requirements,” Egelman said. “A lot of the burden is placed on users, and that results in forgetting passwords and those add up as costs for organizations in terms of resets and support calls. If sites did things differently in terms of how passwords were protected on the backend, a lot of password requirements could be loosened.”
The security community is one that thrives on controversy, drama and debate. For years–decades, really–no topic satisfied this desire like vulnerability disclosure. Long after every possible argument had been forwarded and the horse was not just dead but buried and the grave covered by a strip mall, the debate has limped along, like Happy Days post-shark jump. Now comes the flood of bilious opinions regarding the commercial exploit market, a discussion that feels even more pointless than the disclosure debate because there’s absolutely nothing to debate.
In the beginning, the disclosure debate was just that: a debate among people with well-formed opinions based on their experience finding and publishing vulnerabilities or, on the other end of the equation, dealing with those reports and fixing the bugs. Most researchers argued that they had the right to do what they wanted with the vulnerabilities they found. For a long time, researchers generally kept details private and dealt with the vendors in the background, only publishing the details when a fix was ready. There were exceptions, researchers who simply published what they found whenever they felt like it, either never notifying the vendor or doing so a day or two before they posted their advisories.
That dynamic changed gradually as some researchers began using the possibility of full disclosure as a hammer to pressure vendors into responding to advisories more quickly and dealing with researchers in a professional manner. Some vendors got with the program, others didn’t. Some researchers chose to work with vendors within a loosely defined set of guidelines, others didn’t. And so it’s gone for the last decade or so.
There are reasonable arguments to be made on both sides of the disclosure debate, and there are smart, thoughtful people articulating a variety of positions. But there’s also a huge amount of invective, finger pointing and name-calling involved, all of which may be fun to watch, but it’s not very productive.
There are a lot of echoes of the disclosure debate in the current discussions about exploit sales. The commercial exploit market has developed relatively quickly, at least the public portion of it. Researchers have been selling vulnerabilities to a variety of buyers–government agencies, contractors, other researchers and third-party brokers–for years. But it was done mostly under cover of darkness. Now, although the transactions themselves are still private, the fact that they’re happening, and who’s buying (and in some cases, selling) is out in the open. As with the disclosure debate, there are intelligent people lining up on both sides of the aisle and the discussion is generating an unprecedented level of malice.
One difference this time around is that there are large piles of currency involved, not to mention the privacy, security–and in some cases, physical security–of people in countries around the world. Governments are buying exploits and using them for a variety of purposes. Some are using them to spy on their own citizens, while others are using them to attack their enemies’ networks. And government contractors and other private buyers are purchasing them for their own uses, as well.
Debating the morality or legality of selling exploits at this point is useless. This is a lucrative business for the sellers, who range from individual researchers to brokers to private companies. There are millions of dollars involved, and with that much money at stake, this business is not going away. And it is a business, make no mistake. Some sellers, such as VUPEN, say that they only sell exploits to NATO governments and will never sell to oppressive regimes. Chaouki Bekrar, the VUPEN CEO, has told me this many times, and I’ve heard him say the same thing to any number of other people in the last few years. I am inclined to believe him. But that’s almost beside the point. The issue is that once the exploit is sold, there’s no way to know how it will be used or who it may be shared with. A government buyer could act as a front for a third party that wouldn’t be able to buy the exploit on its own. And VUPEN is just one company. There are countless others that don’t have such explicit rules.
If you need a possible example, look no further than the odd situation that Moxie Marlinspike found himself in recently. Contacted by agents of the Saudi Arabian telecom company Mobily for help with technology to enable interception of traffic from Twitter, Viber and other apps, Marlinspike looked at a design document the group volunteered. He saw that they were contemplating buying SSL exploits as a way to solve their traffic-intercept problems. Marlinspike declined to help with the project, but said that he assumes Mobily will find a way around the issue.
“Their level of sophistication didn’t strike me as particularly impressive, and their existing design document was pretty confused in a number of places, but Mobily is a company with over 5 billion in revenue, so I’m sure that they’ll eventually figure something out,” Marlinspike, a security researcher and former Twitter security official, wrote.
“What’s depressing is that I could have easily helped them intercept basically all of the traffic they were interested in (except for Twitter – I helped write that TLS code, and I think we did it well). They later told me they’d already gotten a WhatsApp interception prototype working, and were surprised by how easy it was. The bar for most of these apps is pretty low.”
That kind of national-scale surveillance is just one application for exploits, commercial or otherwise. As Marlinspike said, even without his considerable knowledge and talent, it’s likely that Mobily had already found its own method for intercepting WhatsApp traffic. Governments, telecoms and other well-funded groups will find a way, whether it’s through their own research, the purchase of commercial exploits or some other method.
The debate shouldn’t be about whether exploits should be sold–they are, and nothing short of an outright legal ban is likely to change that. A commercial market has emerged for this information and markets with willing buyers and sellers don’t simply disappear. They typically expand until either the supply or the demand reaches a limit. There’s no shortage of demand for exploits right now, and the supply will continue to flow as long as the money is there.
Welcome to the era of surveillance.
Facebook users are being warned of malicious Firefox and Chrome extensions that can give an attacker remote control over a Facebook profile.
Microsoft has seen an increase in activity around these extensions, in particular in Brazil. The threat is detected as Trojan:JS/Febipos.A and has been updated recently.
“This Trojan monitors a user to see if they are currently logged in to Facebook. It then attempts to get a configuration file from the website <removed>[,]info/sqlvarbr.php,” said Jonathan San Jose of the Microsoft Malware Protection Center. “The file includes a list of commands of what the browser extension will do.”
The malware can add posts to a profile, like pages, join groups or invite others to join groups, chat and comment on posts. So far, Microsoft said it has seen posts in Portuguese on hijacked profiles trying to get users to click on a link, purported to be a video about a bullying-related suicide. Facebook has already blocked the link as malicious.
The Trojan, meanwhile, acts as a dropper and opens backdoor connections. When the malware infects Chrome, it tries to connect to du-pont.info/updates/[removed]/BL-chromebrasil[.]crx, while on Firefox, the connection is to du-pont.info/updates/[removed]/BL-mozillabrasil[.]xpi. The malware then attempts to update itself from either of those domains.
The malware’s capabilities and messages it posts to entice other users to infect themselves depends on the configuration file downloaded to the malware, Microsoft said. One link Microsoft shared as an example had 2,746 Likes, had been shared 167 times and had 165 comments, indicating a notable number of potential victims. Within hours after the initial analysis, all of those numbers had risen.
“There may be more to this threat because it can change its messages, URLs, Facebook pages and other activity at any time,” Microsoft’s San Jose said.
IE users are not at risk, Microsoft added.
Google and Mozilla have recently added protections that address threats via browser extensions. Google, in December, announced that it would halt silent extension installs in Chrome. These installs were performed without permission via a Windows registry mechanism that allows extensions to be installed alongside other applications, enabling third parties to opt users in without their consent.
Those installs are now disabled by default in Chrome, and a dialog pops up explaining the effect of the extension on the browser and any potential risks. The new feature also automatically disables any extensions installed using external deployment options in the past.
Mozilla, meanwhile, added a click-to-play feature beginning with Firefox 17 in November that prevents out-of-date or vulnerable plug-ins from running automatically. The move was designed to block exploits targeting older versions of plug-ins such as Adobe Flash and Reader.
Jumcar is the name we have given to a family of malicious code developed in Latin America, particularly in Peru, which, according to our research, has been launching attacks since March 2012.
After six months of research we can now detail the specific features of Jumcar, which we will communicate over the following days. Essentially, the malware’s main purpose is stealing financial information from Latin American users of the home-banking services of major banks. Ninety percent of the attacks are channeled through Peru via phishing strategies based on cloning the websites of six banks.
Some variants of the Jumcar family also target two banks in Chile, and another in Costa Rica.
[Figure: Percentage of phishing attacks by country]
Fostering knowledge exchange among different generations of security researchers is perhaps one of the best traits of a good security conference. Judging by its attendance, NoSuchCon can easily claim to be one of these; it’s rare to see such a mix of young researchers and old gurus exchanging ideas and getting to know each other. Organized this year in Paris, NoSuchCon takes place in the Espace Oscar Niemeyer; putting a security conference inside an art exhibition center is admittedly a nice move (congrats to the organizers :)).
REDMOND, Wash.–The Microsoft Digital Crimes Unit has been spearheading botnet takedowns and other anti-cybercrime operations for many years, and it has had remarkable success. But the cybercrime problem isn’t going away anytime soon, so the DCU is in the process of building a new cybercrime center here, and soon will roll out a new threat intelligence service to help ISPs and CERT teams get better data about ongoing attacks. Dennis Fisher sat down with T.J. Campana, director of security at the DCU, to discuss the unit’s work and what threats could be next on the target list.
Threatpost: When you first started going out and doing the botnet takedowns, how much resistance did you see from people wondering why Microsoft was getting involved in this kind of thing?
Campana: Not much resistance at all, really. But we’re very careful about how we do this. We’re not just going out there shooting stuff. We walk in with a pile of legal documents. We’re asking for a judge to agree with what we found. We’ve tried really hard to be transparent with what we do. There are other groups out there that don’t have that same transparency. We’re an open book when it comes to the things we’re doing.
Threatpost: And this isn’t something that Microsoft does on its own. You’ve worked with other vendors on some of these actions. How important is that collaboration aspect of it?
Campana: Very important. We have a huge partnership program through our MAPP (Microsoft Active Protection Program) partners and that’s great. It’s bringing together people of a like mind. It’s been great to see that. I look forward to other companies doing this at some point.
Threatpost: Do you think that’s coming?
Campana: At the geek level, most of my counterparts in other companies want that to happen. We’re very lucky that we have tremendous support from the very top of the company on down for what we do. Without that top-down support, we wouldn’t be where we are. Folks at other organizations are working to get that. It’s necessary for this kind of work.
Threatpost: In the last few years the DCU has focused mainly on the botnet problem. Are there any other large threats looming out there that you’re looking at?
Campana: We’ve been working some on the problem of those phone scams where people call you up and tell you your PC is infected. That’s a huge problem. And we’ve done some work on scareware as well. But botnets are going to be the major issue for us to deal with, I think. One thing that could become a bigger issue is mobile. It changes the way people are connected to the Internet. You’re connected to the Internet in a more permanent way. That’s the way computing is going, so cybercrime would almost have to go that way, too. We’re also looking at some of the targeted attacks that are going after ad platforms. The problem of click fraud is a big one.
Threatpost: Once you do the takedown of a botnet and get through all of that, how much more is the DCU involved with what happens afterward?
Campana: It depends, but the idea is that we are working very hard behind the scenes before we go to the judge. We’re trying very hard to find the person who owns the servers we want to seize. When we go into a data center, that person isn’t there to defend himself, so we are working very hard to notify them that we took the servers. We want to find the person. We have to satisfy the judge that we did everything we could. We see a huge advantage in handing off a very nice package to law enforcement.
Threatpost: How is the Cyber Threat Intelligence Program you’re building going to work?
Campana: We’ve been testing it for about a year now. We’ve been sending emails once a week to the ISPs and CERTs we work with, and we looked at it and said, we’re a software company and a cloud provider, how can we marry those two to make this better. One of the huge assets for us is our scale. So we wanted to build something that scales. We’re signing up CERTs now for the new service. Right now the input for the service is only our MARS (Microsoft Active Response for Security) data. The second piece would be attack data from across the company. I want as much data as we can get.
Threatpost: How close is it to being ready?
Campana: It works in the lab. But there’s a big difference between the lab and Internet scale. When you bring it into the real world, politics and other things get in the way.
Threatpost: One of the solutions to the botnet problem that people have talked about for years is having ISPs or security companies actively remove the malware from users’ machines. Is that a necessary step?
Campana: I want user consent. The user needs to take ownership of his own device. We have to balance what we could do and what we should do.
For every punch a hacker throws, there is a counter from a security company, and then, inevitably, the hacker adjusts again.
That’s what’s happening right now with the PushDo malware.
This week, Dell SecureWorks, Damballa Labs and Georgia Tech combined on a research report exposing the fact that PushDo, a Trojan dropper largely responsible for Cutwail, one of the largest spam-producing botnets on record, was back. PushDo had returned in force with a domain generation algorithm that is capable of spinning up 1,380 .com domains every day in the event its two built-in command-and-control servers are offline.
The publication of the report clearly put the hacker group to work. Researchers at Seculert of Israel reported last night that a DGA found in two new variants of the malware generates .kz domains instead of .com, making the malware again difficult to detect and resilient against antimalware signatures.
“[DGA] is very effective against traditional and on-premises security solutions which are signature based,” Seculert CTO Aviv Raff told Threatpost. “There are already several malware families which have implemented this feature, and I expect to see more in the future.”
Raff said Seculert found the .kz domains on a number of hijacked websites serving the malware. The researchers took advantage of a misconfiguration on the attackers’ part to see a list of files on the folder of the PushDo variants. Two new executables, the new variants, were uploaded in the early afternoon on Wednesday to a server in Europe.
Dell SecureWorks and Damballa experts confirmed on Wednesday that the attackers were likely from Eastern Europe. While the new DGA domains are from Kazakhstan, that doesn’t necessarily mean the attacks originate from the former Soviet republic.
“Anyone can buy a .kz domain,” Raff said. “The interesting part, though, is that buying a .kz domain requires the DNS server and the hosting to be in Kazakhstan.”
PushDo and Cutwail have been taken down numerous times by authorities. Each time, the botnet has returned with new features making it more durable. The latest version, which researchers found in March, has infected anywhere between 175,000 and 500,000 machines, experts at Damballa and SecureWorks said. The malware is capable of detecting what security software is running on a compromised machine and is capable of querying legitimate websites in addition to its C&C servers in order to blend in with regular Web traffic.
Researchers were able to sinkhole some of the command and control .com domains generated by the DGA and recorded more than 1.1 million unique IP addresses trying to connect to the sinkhole–an average of 35,000 to 45,000 daily requests were made.
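Measuring a botnet's footprint from a sinkhole reduces to deduplicating the source addresses in the sinkhole's connection log. A minimal sketch of that bookkeeping — the log format here is an assumption for illustration, not the researchers' actual tooling:

```python
def summarize_sinkhole(log_lines):
    """Each line is assumed to look like 'timestamp src_ip domain'.
    Returns (unique_ip_count, requests_per_domain); the unique-IP
    count is the usual rough proxy for infected-machine count."""
    ips = set()
    per_domain = {}
    for line in log_lines:
        _, src_ip, domain = line.split()
        ips.add(src_ip)                                  # dedupe victims
        per_domain[domain] = per_domain.get(domain, 0) + 1
    return len(ips), per_domain

# Hypothetical log entries standing in for real sinkhole traffic:
sample_log = [
    "1368612000 203.0.113.5 kx9f2qmzt.com",
    "1368612001 203.0.113.5 kx9f2qmzt.com",   # repeat beacon, same bot
    "1368612002 198.51.100.7 p0w3rmqlsv.com",
]
unique_ips, domain_counts = summarize_sinkhole(sample_log)
```

Unique IPs are only a rough victim count in practice: NAT hides multiple bots behind one address, while DHCP churn makes one bot appear as several, which is why the researchers report ranges rather than exact figures.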
The DGA periodically generates and then tests new domain names, determining whether a C&C server responds at each one. This technique hinders static reputation services that maintain lists of C&C domains and enables attackers to bypass signature-based and sandbox protections. It also cuts down the need for a large command-and-control infrastructure, lessening the chances it is exposed to researchers and the authorities. This version of PushDo was generating nine- to 12-character .com domains.
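In outline, a generate-and-test DGA like the one described is just a seeded pseudorandom generator feeding a probe loop: every bot derives the same candidate list from a shared seed (often the current date), then walks the list until a domain answers. The seed handling, alphabet and probe callback below are invented for illustration; this is not a reconstruction of PushDo's actual algorithm:

```python
import random
import string

def candidate_domains(seed: int, count: int):
    """Deterministically derive 9- to 12-character .com labels from a
    shared seed, so every bot computes the identical daily list."""
    rng = random.Random(seed)
    for _ in range(count):
        length = rng.randint(9, 12)
        label = "".join(rng.choice(string.ascii_lowercase)
                        for _ in range(length))
        yield label + ".com"

def find_live_c2(seed: int, count: int, resolves):
    """Probe candidates in order until one answers. `resolves` stands
    in for a real DNS lookup plus C&C handshake."""
    for domain in candidate_domains(seed, count):
        if resolves(domain):
            return domain
    return None
```

The defender's side of the same coin is what the researchers did: run the algorithm forward, register or sinkhole the candidates before the operators do, and watch the bots walk into the trap.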
In an Oslo Freedom Forum workshop offering advice to free speech advocates on how to better secure their devices against government surveillance, security researcher Jacob Appelbaum uncovered a new strain of malware with backdoor capabilities on the Mac machine of an Angolan activist attending the event.
Appelbaum is probably best known for his work with the Tor Project, which enables online anonymity, and for his affiliation with, and various legal battles stemming from, the 2010 and 2011 publications of U.S. State Department cables by the online whistleblower site WikiLeaks. Appelbaum was also the first researcher to publicly detail the attack on the certificate authority Comodo.
F-Secure’s Mac analyst, known simply as “Brod,” is still in the process of investigating the malware, but his fellow F-Secure researcher Sean Sullivan notes that the sample is signed with a legitimate Apple Developer ID. It launches from the users and groups folder and dumps screenshots into another folder called “MacApp.”
The Trojan appears capable of a number of fairly simple spying functions, such as taking screenshots and uploading .zip files, to name a couple. It also connects to two command-and-control servers, one in the Netherlands and one in France. At the time of his publication yesterday morning, Sullivan wrote that the French C&C server would not resolve and the Dutch one was informing him that he was forbidden from accessing it.
On Twitter, Sullivan and Appelbaum discussed that the Trojan appeared to be related to an older piece of Mac malware called HackBack.
Appelbaum claims that the Angolan activist’s Mac was compromised in a spear-phishing attack.
Apple has since revoked the Developer ID with which the malware is signed, according to a tweet sent by Appelbaum.
According to VirusTotal, one of 46 antivirus vendors is detecting the threat. The vendor is F-Secure, and they are identifying it as Backdoor: OSX/KitM.A. (SHA1: 4395a2da164e09721700815ea3f816cddb9d676e).
Mozilla has tapped the brakes on its plans to block third-party cookies by default in the Firefox browser.
Test versions of Firefox 22, scheduled for a June release, were supposed to include a patch that blocked third-party cookie drops by default. However, Mozilla CTO Brendan Eich said yesterday those plans have been temporarily put on hold for more testing.
Mozilla has been promoting this privacy-conscious decision for months, most publicly at the RSA Conference in February. Chief privacy officer Alex Fowler commented during a panel discussion about the practices of advertisers, data brokers and others who monitor and profit from users online behaviors. In particular, Fowler concentrated on the practice of third parties dropping cookies on users’ machines without the user’s consent and from sites the user has not visited. The policy, Fowler said, would state that in order for cookies to be placed on a user’s computer, the user must interact with the site, not third-party content on another site. Apple’s Safari browser blocks third-party cookies by default, and this is the model Mozilla is following.
This week’s announcement by Eich backpedals a little on Mozilla’s stance.
“The idea is that if you have not visited a site (including the one to which you are navigating currently) and it wants to put a cookie on your computer, the site is likely not one you have heard of or have any relationship with,” Eich wrote on his blog. “But this is only likely, not always true.”
Eich said Mozilla will refine its patch to address false positives and negatives. Eich offered an example where a user could visit a site that would embed a cookie from another site it owns as a false positive. As for false negatives, he said just because a user visits a site once should not be consent for that site to drop a cookie and track the user’s activities.
“Our challenge is to find a way to address these sorts of cases,” Eich said. “We are looking for more granularity than deciding automatically and exclusively based upon whether you visit a site or not, although that is often a good place to start the decision process.”
Eich said Mozilla will ship a refined version of the patch with blocking on by default.
“Our next engineering task is to add privacy-preserving code to measure how the patch affects real websites,” he said. “We will also ask some of our Aurora and Beta users to opt-in to a study with deeper data collection.”
This week, the patch, Eich said, moved to the Firefox 22 beta release, but it is not on by default. Users would have to opt in; the patch is on by default in the Aurora release. Eich said false positives can hamper the user experience on sites they visit, while false negatives enable tracking where it’s not wanted.
“We have heard important feedback from concerned site owners. We are always committed to user privacy, and remain committed to shipping a version of the patch that is ‘on’ by default,” Eich said. “We are mindful that this is an important change; we always knew it would take a little longer than most patches as we put it through its paces.”
Privacy advocates such as the Electronic Frontier Foundation have praised Mozilla’s intention to follow Apple’s lead here, yet recognized that making a change such as this could affect the bottom line of many advertisers.
Other privacy-related tracking measures such as Do Not Track are also political hot potatoes between privacy advocates and advertisers. Microsoft, for example, ships Internet Explorer 10 with DNT turned on by default, a signal to sites that the user does not want to be tracked. Some sites, however, will ignore the signal, and groups such as the Apache HTTP Server Project argue that Microsoft’s decision does not indicate the user’s wishes. Mozilla’s Fowler, meanwhile, said fewer than 15 percent of Firefox users send the DNT header.
“People are asking for a different level of privacy on your service, and you have to listen to that. It’s critical to the business and web ecosystem,” Fowler said at RSA. “At Mozilla, we also do online advertising campaigns and email outreach. We try to think about the tracking we impose on users, so we are making an effort to work with vendors who are willing to respect the DNT header. It’s not a condition, but we think it’s important for organizations advocating for this that we spur service providers to understand and respect it.”
Now cybercriminals from Brazil are also interested in Bitcoin currency. In order to join the horde of phishers on the lookout for the virtual currency they have applied their best malicious technique: malicious PAC on web attacks, and phishing domains.
The malicious usage of PAC (Proxy Auto-Config) among Brazilian black hats is not something new weve known about it since 2007. Generally, these kind of malicious scripts are used to redirect the victims connection to a phishing page of banks, credit cards and so on. We described these attacks in detail here. In 2012 a Russian Trojan banker called Capper also started using the same technique. When its used in drive-by-download attacks, it becomes very effective.
After registering the domain java7update.com, Brazilian criminals started attacking several websites, inserting a malicious iframe in some compromised pages:
A new malware campaign has been hitting Pakistan hard over the last few months and after a little e-sleuthing, it appears the not-so-stealthy attacks have been originating from nearby India and exploiting a certificate to run its binaries.
Security firm Eset has a full rundown of the campaign today on its WeliveSecurity.com blog by malware researcher Jean-Ian Boutin, including an array of details involving how the attack has been executed and the types of payloads being deployed on unsuspecting Pakistanis’ computers.
This campaign relies on the exploitation of a bogus, digitally signed certificate from the Indian company Technical and Commercial Consulting Pvt. Ltd. Initially issued in 2011 and revoked for files used after March 2012. Still though the cert was still used to sign more than 70 different malicious binaries on and off from that March until September of that year.
The malware uses two vectors – the first is a well-known Word document vulnerability, CVE-2012-0158, that’s been used in everything from the Red October campaign to a bevy of attacks against Tibetan and Uyghur users as of late. The other vector spread Word and PDF files that once opened, “downloads and executes additional malicious binaries.” Some of those files are disguised as “pakistandefencetoindiantopmiltrysecreat.exe” and “pakterrisiomforindian.exe,” according to the blog post.
Payloads are set up to glean data – screenshots, keystrokes, documents in the computer’s trash – from users’ computers and in turn send them to the attackers’ servers. Interestingly enough, as Boutin notes, the information is being uploaded to the attacker’s computer unencrypted, so it’s easy to see what exactly is being transferred.
The blog also notes a number of Indian connections, including the mysterious Indian code signing certificate, references to Indian culture in the binaries and signing timestamps between 5:06 and 13:45, consistent with eight hour shifts worked in India.
An accompanying graph in the blog entry suggests that while other nations are being hit by the campaign, it’s largely affecting Pakistan, with 79 percent of the targets affecting that South Asian country.
A similar type of malware, Redpill, was found hijacking users in India last month. That campaign also stole screenshots, in addition to bank account credentials and email information and was the second coming of a malware strain that made its first appearance in 2008.
Boutin’s full research on the malware targeting Pakistan is being presented at the Caro Workshop, a security conference in Bratislava, Slovakia tomorrow. For more on his research, head to ESET’s blog.
- According to KSN data, Kaspersky Lab products detected and neutralized 1 345 570 352 threats in Q1 2013.
- A total of 22,750 new modifications of malicious programs targeting mobile devices were detected this past quarter - that’s more than half of the total number of modifications detected in all of 2012.
- Some 40% of the exploits seen in the first quarter of this year target vulnerabilities in Adobe products.
- Nearly 60% of all malicious hosts are located in three countries: the US, Russia, and the Netherlands.
Four times since 2008, authorities and technology companies have taken the prolific PushDo malware and Cutwail spam botnet offline. Yet much like the Energizer Bunny, it keeps coming back for more.
In early March, researchers at Damballa discovered a new version of the malware that had adopted a domain generation algorithm (DGA) in order to not only help it avoid detection by security researchers, but to add resiliency.
Cutwail has historically been one of the largest spam botnets, hoarding millions of compromised computers that have sent billions of spam messages through the years. The malware is installed on compromised machines by the PushDo dropper Trojan.
This version of PushDo has infected anywhere from 175,000 to 500,000 bots, researchers said. Past versions have been able to collect system data in order to determine which antivirus software and firewall processes were running on a compromised machine. The latest iteration, in addition to its DGA capabilities, can also query legitimate websites such as universities and ISPs in order to blend in with regular web traffic and trick sandbox-type analyses.
The added domain generation algorithm capabilities enable PushDo, which can also be used to drop any other malware, to further conceal itself. The malware has two hard-coded command and control domains, but if it cannot connect to any of those, it will rely on DGA to connect instead. This capability was only recently discovered.
“On the technical side of writing (DGA) code, there are enough examples out there that the average hacker could do that part,” said Brett Stone-Gross, Counter Threat Unit Senior Security Researcher, Dell SecureWorks. “The more difficult thing is having the infrastructure set up and the organization to know you need new domains set up and registered. This takes more organization than hackers in the past have demonstrated and shows how sophisticated some botnet operators are getting with business plans and having the commitment to follow a plan.”
Researchers at Dell SecureWorks, Georgia Tech and Damballa were able to sinkhole some of the command and control domains generated by the DGA and recorded more than 1.1 million unique IP addresses trying to connect to the sinkhole–an average of 35,000 to 45,000 daily requests were made.
While most traditional malware carry built-in C&C domain names, this tactic becomes moot if researchers get their hands on the binary and block or sinkhole it. As a counter-tactic, malware writers began dynamically sending regularly updated configuration lists with new C&C server information, yet this was vulnerable to interception as well.
DGA is the latest countermeasure. These algorithms will periodically generate and then test new domain names and determine whether a C&C responds. This technique hinders static reputation servers that maintain lists of C&C domains and enables hackers to bypass signature-based and sandbox protections. It also cuts down the need for a large command and control infrastructure, lessening the chances it is exposed to researchers and the authorities. This version of PushDo generates between nine- and 12-character dot-com domains.
PushDo joins Zeus and the TDL/TDSS malware families in using DGA. Damballa learned from passive DNS analysis it conducted that PushDo was generating more than 1,300 unique domain names every day, most of these lasting just a day, cutting into the effectiveness of blacklisting operations.
“This one is very similar to Zeus as far as effectiveness,” said Jeremy Demar, Senior Threat Analyst, Damballa. “Zeus’ primary communications method was peer-to-peer. If it’s in a corporate environment that blocks peer-to-peer, it falls back to DGA. This is very similar in capabilities and effectiveness.”
Among the 1.1 million IPs connecting to the PushDo DGA domains were a number of government organizations, government contractors and military networks.
“It’s a relatively small population on the interesting list as far as numbers go, but because of the level of sensitivity of those organizations, we made sure to let everyone know,” Stone-Gross said, adding that a takedown similar to some of the previous efforts requires a lot of legal and technical cooperation. Both companies hope that awareness of this issue will lead to updates of endpoint protection technologies.