Monthly archives for November, 2016

The cloud’s missing link: Monitoring and management

As cloud deployments become bigger and more complex, monitoring becomes more important

Monitoring and management of cloud-based production environments is often an afterthought.
Why? Perhaps there is so much to think about — security, cost management, and governance —
that monitoring and management fall by the wayside. Or perhaps IT believes it can simply
rely on the clouds’ native tools.

Monitoring and management are important to any data center, whether in the cloud or not.
These tools react to the real-time data gathered from system operations — storage, compute,
applications, databases, and so on — as well as respond to trends within the data.

For example, say the performance of the database is falling behind the requirements of the
applications. Cloud monitoring and management tools would notify the cloud admins of this
situation, and either the admin or an automated process can take corrective action, such as
launching more machine instances to increase the database’s performance.

However, the most powerful benefit of cloud monitoring and management is the ability to
watch trends. This means gathering many data points over time and drawing conclusions as to
what they mean. For example, spotting accelerated use of storage services would suggest
there could be imminent performance and capacity problems. From there, proactive corrective
action could be taken, whether automated or manual.
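As a rough illustration of trend-based monitoring, here is a minimal sketch (not any particular vendor’s tool) that fits a line to recent storage-usage samples and forecasts when capacity will run out, so corrective action can be taken before the problem hits:

```python
# Toy sketch: fit a least-squares line to recent storage-usage samples and
# forecast how many days remain before capacity is exhausted.
def days_until_full(samples, capacity_gb):
    """samples: list of (day, used_gb) points, oldest first."""
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    # Least-squares slope: GB of growth per day.
    num = sum((d - mean_x) * (u - mean_y) for d, u in samples)
    den = sum((d - mean_x) ** 2 for d, _ in samples)
    slope = num / den
    if slope <= 0:
        return None  # usage is flat or shrinking: no exhaustion forecast
    latest_day, latest_used = samples[-1]
    return (capacity_gb - latest_used) / slope

usage = [(0, 100), (1, 120), (2, 140), (3, 160)]  # growing ~20 GB/day
print(days_until_full(usage, 500))  # → 17.0
```

A real monitoring pipeline would feed this kind of forecast into an alerting or auto-scaling rule rather than printing it, but the principle — many data points over time, one conclusion — is the same.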

Now that you’re sold on the benefits of cloud monitoring and management, how do you go about
choosing the right tools? There are three basic criteria:

1. Your monitoring and management tools should be cloud-agnostic. They need to span all the
clouds that you’re using, both private and public. A tool that’s native to only one cloud
is no good.
2. Make sure your tool can gather system data over time, and then make sense of that data. In
other words, it should have data analytics capabilities.
3. Your tools should be able to take automated corrective action. This means you can
preprogram responses to common issues, so your clouds become self-healing.

AdultFriendFinder network hack exposes 412 million accounts

Almost every account password was cracked, thanks to the company’s poor security practices. Even “deleted” accounts were found in the breach.

A massive data breach targeting adult dating and entertainment company Friend Finder Network has exposed more than 412 million accounts.

The hack includes 339 million accounts from AdultFriendFinder, which the company describes as the “world’s largest sex and swinger community.”

That figure includes over 15 million “deleted” accounts that weren’t purged from the databases.

On top of that, 62 million accounts and 7 million more were stolen from two of the company’s other sites, as well as a few million from other smaller properties owned by the company.

The data accounts for two decades’ worth of information from the company’s largest sites, according to breach notification site LeakedSource, which obtained the data.

The attack happened at around the same time as one security researcher, known as Revolver, disclosed a local file inclusion flaw on the AdultFriendFinder site, which if successfully exploited could allow an attacker to remotely run malicious code on the web server.

But it’s not known who carried out this most recent hack. When asked, Revolver denied he was behind the data breach, and instead blamed users of an underground Russian hacking site.

The attack on Friend Finder Networks is the second in as many years. The company, based in California and with offices in Florida, was hacked last year, exposing almost 4 million accounts, which contained sensitive information, including sexual preferences and whether a user was looking for an extramarital affair.

ZDNet obtained a portion of the databases to examine. After a thorough analysis, however, the data does not appear to contain sexual preference information, unlike the 2015 breach.

The SQL databases for the three largest sites included usernames, email addresses, dates of last visit, and passwords, which were either stored in plaintext or scrambled with the SHA-1 hash function, which by modern standards is not as cryptographically secure as newer algorithms.

LeakedSource said it was able to crack 99 percent of all the passwords from the databases.
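The reason unsalted SHA-1 falls so quickly is that the same password always produces the same hash, so an attacker can hash a wordlist once and look every stolen hash up in the resulting table. A minimal sketch of that dictionary attack (illustrative only, not LeakedSource’s actual tooling):

```python
import hashlib

# Hash each candidate password once, then look stolen hashes up in the table.
# This is why unsalted SHA-1 password storage is considered broken.
def crack(stolen_hashes, wordlist):
    table = {hashlib.sha1(w.encode()).hexdigest(): w for w in wordlist}
    return {h: table[h] for h in stolen_hashes if h in table}

wordlist = ["123456", "password", "letmein"]
stolen = [hashlib.sha1(b"password").hexdigest(),
          "0000000000000000000000000000000000000000"]  # unknown hash
print(crack(stolen, wordlist))
# → {'5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8': 'password'}
```

A salted, deliberately slow algorithm such as bcrypt or scrypt defeats this precomputation, which is exactly what the article means by “newer algorithms.”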

The databases also included site membership data, such as if the user was a VIP member, browser information, the IP address last used to log in, and if the user had paid for items.

ZDNet verified the portion of data by contacting some of the users who were found in the breach.

One user (who we are not naming because of the sensitivity of the breach) confirmed he used the site once or twice, but said that the information he provided was “fake” because the site requires users to sign up. Another confirmed user said he “wasn’t surprised” by the breach.

Another two-dozen accounts were verified by enumerating disposable email accounts with the site’s password reset function. (We have more on how we verify breaches here.)

When reached, Friend Finder Networks confirmed the site vulnerability, but would not outright confirm the breach.

“Over the past several weeks, FriendFinder has received a number of reports regarding potential security vulnerabilities from a variety of sources. Immediately upon learning this information, we took several steps to review the situation and bring in the right external partners to support our investigation,” said Diana Ballou, vice president and senior counsel, in an email on Friday.

“While a number of these claims proved to be false extortion attempts, we did identify and fix a vulnerability that was related to the ability to access source code through an injection vulnerability,” she said.

“FriendFinder takes the security of its customer information seriously and will provide further updates as our investigation continues,” she added.

When pressed on details, Ballou declined to comment further.

But why Friend Finder Networks has held onto millions of accounts belonging to customers is a mystery, given that the site was sold to Penthouse Global Media in February.

“We are aware of the data hack and we are waiting on FriendFinder to give us a detailed account of the scope of the breach and their remedial actions in regard to our data,” said Kelly Holland, the site’s chief executive, in an email on Saturday.

Holland confirmed that the site “does not collect data regarding our members’ sexual preferences.”

LeakedSource said that, breaking with its usual practice because of the nature of the breach, it will not make the data searchable.


Nvidia launches virtual GPU monitoring, analytics

To date, virtual GPU environments have been harder to track relative to physical infrastructure.

Nvidia will launch monitoring software that will better track usage and optimization for its virtual graphics processing environments.

The graphics processor company has been pushing into high performance computing, enterprise servers, and virtual desktop systems.

According to Nvidia, the latest version of GRID, which will land August 26, will include GRID Monitoring, an analytics system to track graphics virtualization.

Now GRID Monitoring will track virtual GPU types, performance, and usage across companies and clusters.

GRID Monitoring includes:

  • Discovery tools to query virtual GPUs.
  • Insights on properties such as name, displays supported, maximum resolution, frame buffer status, and license status.
  • Utilization reports that track engines for 3D, encoding, and decoding.

GRID Monitoring can be used with native monitoring tools, via virtualization and management consoles from companies like VMware and Citrix, and via custom applications.
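The article doesn’t describe GRID Monitoring’s actual query interface, but a custom application consuming a utilization report might look something like this sketch. The CSV column layout here is an assumption (loosely modeled on Nvidia’s `nvidia-smi --format=csv` style of output), not GRID’s real schema:

```python
import csv
import io

# Hypothetical vGPU utilization report; the column names are illustrative.
SAMPLE = """\
vgpu_name,util_3d_pct,util_encode_pct,util_decode_pct
GRID M60-2Q,91,10,0
GRID M60-1Q,12,0,0
"""

def hot_vgpus(report_csv, threshold=80):
    """Return the names of virtual GPUs whose 3D engine exceeds threshold %."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [r["vgpu_name"] for r in rows if int(r["util_3d_pct"]) > threshold]

print(hot_vgpus(SAMPLE))  # → ['GRID M60-2Q']
```

The point is simply that once virtual GPU utilization is exposed as data, admins can build the same threshold and trend alerts they already run against physical infrastructure.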


Odinaff Trojan attacks banks and more, monitoring networks and stealing credentials

New Trojan is suspected to be linked to the Carbanak hacking campaign — and is potentially very lucrative for criminals, warn Symantec researchers.

A previously undocumented banking Trojan is targeting financial institutions across the globe and is being used by cybercriminals to spy on networks of compromised organisations and stealthily defraud them of funds.

The Odinaff trojan has been active since January this year, carrying out attacks against organisations operating in the banking, securities, trading, and payroll sectors, as well as those which provide support services to these industries.

According to cybersecurity researchers at Symantec, the Trojan contains custom-built malware tools purposely built for exploring compromised networks, stealing credentials, and monitoring and recording employee activity in attacks which researchers say can be highly lucrative for hackers — and bear the hallmarks of the Carbanak financial Trojan.

Those behind Odinaff are using a variety of techniques to break into the networks of targeted organisations: the most common method of gaining access is tricking employees into opening documents containing malicious macros.

While macros are turned off by default in Microsoft Word, the recipient can opt to enable them — which they’re encouraged to do by a malicious attachment — at which point the Odinaff Trojan will be installed on their system. One way a user can avoid being infected in this way is simply to leave macros disabled, as they are by default.

Another common technique involves the use of password-protected .RAR archive files, which trick the victim into installing Odinaff. While cybersecurity researchers haven’t been able to determine how these malicious documents and links are distributed by cybercriminals, it’s believed spear-phishing is the main method of deployment.

Odinaff is a sophisticated Trojan capable of taking screenshots of infected systems every five to 30 seconds, which it sends back to a remote command-and-control server. The Trojan also downloads and executes RC4 cipher keys and can issue shell commands.
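For readers unfamiliar with the RC4 cipher mentioned above, it is a very small stream cipher: a key schedule permutes a 256-byte state, and a keystream derived from that state is XORed with the data. A minimal reference implementation (for illustration only — RC4 is obsolete and should never protect real data):

```python
# Minimal RC4 stream cipher. Encryption and decryption are the same operation.
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the state using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data.
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

print(rc4(b"Key", b"Plaintext").hex())  # → bbf316e8d940af0ad3
```

Because the keystream depends only on the key, applying `rc4` twice with the same key recovers the original data — convenient for malware authors, and one reason RC4-encrypted command-and-control traffic is a recurring pattern in this class of Trojan.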

Once the Odinaff Trojan has performed the initial compromise of the infected machine, a second piece of malware known as Batel is installed. This second malware infection is capable of running payloads solely in the memory, effectively enabling it to stealthily run in the background.

Given the specialist nature of these attacks, Odinaff requires a large amount of manual intervention, with those involved carefully managing attacks and only downloading and installing new tools when required, suggesting that the group behind it is sophisticated and well resourced.

Indeed, cybersecurity researchers suspect that Odinaff is in fact related to the Carbanak hacking group which has stolen over one billion dollars from banks since first appearing in 2013. Researchers note that one of the IP addresses used by Odinaff has been mentioned in connection to the Oracle Micros breach, an attack which saw the compromise of hundreds of point-of-sale devices.

In addition to this, three Odinaff command and control IP addresses have been connected to previous Carbanak campaigns, which saw banks in 30 countries being targeted by criminal actors suspected to originate from Russia, Ukraine, Europe, and China.

While many cyberattacks against banks are limited by region — for example, Zeus Trojan variant Panda specifically targeted Brazil in the run-up to the country hosting the Olympic Games — the fact that like Carbanak, Odinaff is targeting financial institutions across the entire globe could ultimately mean the two types of attack are related.

Banks across the world have been attacked with this Trojan, but it’s banks in the US that find themselves most targeted by Odinaff, followed by those in Hong Kong, Australia, and the UK.

The Odinaff group is just the latest in a line of cybercriminal groups who’ve realized that while it’s — in theory — much harder to infiltrate the networks of a bank, the potential payoff can be very, very lucrative. The GozNym banking Trojan and the data-stealing Qadars Trojan malware are other examples of how hackers are trying to break into banks.


ISPs against broadband speed regulation

ISPs say nearly every factor affecting individual users’ speeds is out of their control, including the type of network technology, backhaul capacity, end-user hardware, distance to the exchange, and weather.

Australia’s internet service providers (ISPs) have spoken out against the proposal by the Australian Competition and Consumer Commission (ACCC) to force them to report accurate broadband speed information to customers, saying many of the factors affecting speed fluctuation are out of their control.

The ACCC a year ago suggested monitoring broadband services in an effort to encourage competition between fixed-line broadband retail service providers (RSPs) and aid consumers in making more informed purchasing decisions, with a discussion paper released in July.

The submissions from Optus, TPG, and Telstra were all against the proposal, while the National Broadband Network (NBN) company favoured it.

TPG’s submission [PDF] said the regulator should not step into an area already taken care of through competition.

In addition, RSPs should not be forced to provide information on speeds because many of the factors affecting speeds are out of their control, TPG said.

“There are a number of factors that affect an end user’s perception of ‘speed’, including the type of technology used, backhaul capacity, end-user hardware and connection method, source of content, distance to exchange, weather, interference, quality of the connection (cable, copper wires). Many of these, as noted in ACCC’s Information Paper dated July 2011, are beyond the control of RSPs,” TPG pointed out.

“Consumers are, in many instances, not aware of the extent of those issues and often will not understand that local issues, such as underlying computer and network resource consumption that may or may not be known (eg, virus traffic or unknown download traffic), and third-party issues such as congestion at, or a poor quality of, data source, can be affecting their perception.

“These factors limit a RSP’s ability to provide representations of actual broadband speed that is likely to be attainable by consumers at their premises.”

Telstra’s submission [PDF] also pointed toward the many factors that affect a user’s speed, such as “the performance of devices, Wi-Fi or cabling within a consumer’s premises, the line speed of the broadband service, the capacity of the backhaul network to cope with changes in aggregate customer demand at different times of the day, and the performance of remote servers and their connections to the internet”, making it difficult for RSPs to provide accurate information.

“These factors are all variable and many of them are outside the control of the RSP,” Telstra said.

“Consequently, it is not possible to accurately forecast a specific speed for any individual customer for any specific time. The best that can be done is to forecast a probability that the actual speed experienced will be within a certain range of speeds.”

Telstra added, however, that ACCC guidance on speed claims should be updated to allow more flexibility in speed reporting, as well as minimum expectations for such information.

While Optus’ submission [PDF] agreed with the ACCC’s objective of ensuring consumers receive clearer information concerning speeds, it said this would likely not be achieved through an increase to the regulatory burden on ISPs.

“There needs to be better recognition of the different factors that influence speed or performance, many of which are outside an ISP’s control. A critical first step to delivering this improved transparency is to understand why performance information is largely absent from the market today,” Optus said.

All that would result from regulation on the matter is ISPs constantly risking breach of the rules thanks to factors outside of their control affecting speeds, according to Optus.

“Given the technical limitations of legacy-based services, where the length of the copper runs and quality of the copper means that performance can differ on a premise-by-premise basis, it is not surprising that ISPs are reluctant to advertise speeds,” Optus said.

“The benefits to be gained from providing performance information are likely to be outweighed by the risks of breaching the ACCC’s guidelines and facing enforcement action and reputational damage.”

Optus also pointed out that NBN’s high CVC charge forces ISPs to “balance service performance and price to retail customers”.

In NBN’s own submission [PDF], it said it “strongly agrees” that consumers should be provided with better broadband speed information in addition to download quotas and pricing, calling it a “win-win” for both consumers and the industry.

“Network operators are responsible for the performance of their part(s) of the underlying network, and provide relevant information about that network to RSPs. The RSP is then best placed to provide end users with performance information, including service speed, regarding their retail products,” NBN argued.

“For RSPs and industry, providing clear and accurate information about speed will result in greater customer satisfaction, lower churn, and reduced cost in dealing with dissatisfied customers. It is also an investment which will likely increase brand loyalty and arguably mitigate regulatory risk.”

NBN suggested that speed information should also be provided on mobile services due to “increasing convergence” in the industry. Telstra disagreed with this in its submission, calling Australia’s mobile speeds “world class”.

The ACCC last month published submissions on the matter from the telco industry, along with one from the Australian Communications Consumer Action Network (ACCAN), which supported the proposal that consumers be provided with better information on fixed broadband speed and performance.

“We fully support the ACCC’s investigation into this issue, and urge the commission to implement guidelines and other measures that will result in clearer information for consumers,” ACCAN CEO Teresa Corbin said.

“ACCAN asserts that consumers should have access to information which helps them compare services and describes how the service will work for them.

“The proposed Broadband Performance Monitoring and Reporting Program, which aims to test service performance, would also help to support and verify the speed claims made by RSPs. Information on any prioritisation over the network that occurs should also be presented to consumers.”

In a joint submission [PDF], Communications Alliance and the Australian Mobile Telecommunications Association (AMTA) agreed that while more information on broadband speeds should be provided to consumers, it is questionable as to how this could be achieved realistically.

“Industry strongly believes that it is important to focus on principles, given that it is not realistic to make deterministic statements about speed and performance for individual customers,” the joint submission said.

“The market and technologies are also highly dynamic. Any attempt to prescribe a solution will quickly become outdated and there is a real risk that any prescriptive approach would stifle innovation in the industry.

“In this regard, most industry participants remain deeply sceptical as to whether the ACCC’s proposal for a broadband quality monitoring regime in Australia would achieve its objectives.”

AMTA and Comms Alliance added that for smaller ISPs, adhering to such a regime could have anti-competitive effects.

Instead, the two bodies recommended that Comms Alliance create an industry guideline on broadband performance in collaboration with the telco industry, the ACCC, AMTA, the Australian Communications and Media Authority (ACMA), the Department of Communications, and ACCAN.


This new Mac attack can secretly monitor your webcam, microphone

A new app aims to prevent malware from recording video calls.

In recent years we’ve seen malware that targets webcams and microphones in an effort to secretly record what a person says and does.

Even the NSA has developed code that remotely switches on a person’s webcam.

But things are different when it comes to Mac malware, because each Apple laptop has a hard-wired light indicator that tells the user when the webcam is in use. At least you know when you’re being watched.


That could change with a new kind of webcam piggyback attack, according to research by Synack’s Patrick Wardle, which he will present Thursday at the Virus Bulletin conference.

After examining a number of malware samples, Wardle believes that attackers can easily take advantage of the light indicator in most modern Macs to mask malware that secretly records your phone calls and video chats.

The “attack” works like this: the malware quietly monitors the system for user-initiated video sessions, such as FaceTime or Skype video calls, then piggybacks on the webcam or microphone to covertly record the session. Because the light is already on, there are no visible indications of this malicious activity, which lets the malware record both the audio and video without risk of detection.

After all, it’s the phone and video calls that hackers and nation states want to hear, not the regular ramblings of a person sitting at their desk throughout the day.

Wardle told me in an email that when a person legitimately uses their webcam or microphone, it’s typically for more sensitive things, such as a journalist talking to a source, or an important business meeting with an executive, or even a person’s private FaceTime conversation with their partner — all of which could be invaluable for surveillance.

Enter his new tool, Oversight, which aims to block rogue webcam connections that piggyback off legitimate video calling apps, and alerts you when your microphone is in use.

If malware tries to piggyback off a webcam session, the app will alert the user — allowing them to block it. Wardle said that the tool will log the process, allowing security experts or system administrators to take a closer look.
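To make the idea concrete, here is a toy sketch of one crude way to approximate “is the camera in use?” on a Mac. It assumes (and this is an assumption, not OverSight’s actual implementation) that macOS camera access is mediated by helper daemons such as `VDCAssistant` or `AppleCameraAssistant`, so their presence is a rough proxy for an active session:

```python
import subprocess

# Toy sketch: treat the presence of (assumed) macOS camera helper daemons
# as a rough signal that the webcam is active. OverSight's real detection
# is considerably more involved.
def any_process_running(names):
    for name in names:
        try:
            # pgrep -x matches the exact process name; exit code 0 = found.
            result = subprocess.run(["pgrep", "-x", name],
                                    capture_output=True, text=True)
        except FileNotFoundError:
            return False  # pgrep unavailable on this system
        if result.returncode == 0:
            return True
    return False

CAMERA_DAEMONS = ["VDCAssistant", "AppleCameraAssistant"]
if any_process_running(CAMERA_DAEMONS):
    print("camera appears to be in use -- check which apps are recording")
```

A real tool would go further — identifying *which* process opened the device and whether a second, unexpected process joined the session, which is the piggybacking OverSight is designed to catch.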

The good news is that Wardle said he’s not aware of any Mac malware that exists to do this, but he noted it isn’t difficult to implement.

“It’s just a few lines [of code], and it doesn’t require any special privileges,” he said. “Currently, Mac malware such as Eleanor could easily implement this capability with this code.”

Wardle has put the app up for free on his website.


Internet usage monitoring becomes the norm in Brazil

Most organizations in the country monitor or block access to content during working hours

Monitoring staff Internet usage has become a common practice in Brazilian organizations, according to a study by the Brazilian Steering Committee.

The likelihood that a Brazilian company monitors the internet browsing history of its employees increases with its size.

According to the report, 38 percent of companies with up to 49 staff do so, with the percentage rising to 58 percent at firms employing 50 to 249 people and 73 percent at organizations with more than 250 staff.

Some 43 percent of the companies surveyed also prevent staff from accessing certain types of online content.

When it comes to blocked content, social networks top the list: such websites are blocked by 81 percent of large companies, while 48 percent of organizations employing fewer than 50 people also deny access to the likes of Facebook and Twitter.

At these organizations, websites with pornographic content top the list of unauthorized URLs (73 percent of employers block such sites) followed by games (65 percent), file downloads (49 percent), entertainment portals, news or sports websites (43 percent), personal email (37 percent) and communication services such as instant messaging (36 percent).
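Category-based blocking of this kind usually boils down to mapping hostnames to categories and denying the categories a policy forbids. A toy sketch of that logic (the hostname-to-category table here is illustrative, not a real classification feed):

```python
from urllib.parse import urlparse

# Illustrative hostname-to-category table and policy; real products use
# large, continuously updated classification feeds.
CATEGORIES = {
    "facebook.com": "social",
    "twitter.com": "social",
    "example-games.com": "games",
}
BLOCKED_CATEGORIES = {"social", "games"}

def is_blocked(url):
    host = urlparse(url).hostname or ""
    host = host.removeprefix("www.")
    return CATEGORIES.get(host) in BLOCKED_CATEGORIES

print(is_blocked("https://www.facebook.com/feed"))  # → True
print(is_blocked("https://news.example.org"))       # → False
```

Uncategorized hosts fall through to “allowed” in this sketch; a stricter workplace policy might default the other way.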



Monitoring SSL traffic now everyone’s concern: A10 Networks

As the uptake of SSL grows, Tim Blombery, systems engineer at A10 Networks, said threat actors are increasingly leveraging SSL-based encryption to hide malicious activity.

As usage of Secure Sockets Layer (SSL) moves beyond the login page or banking website and out into the wider web, Tim Blombery, Systems Engineer at security firm A10 Networks, believes monitoring SSL traffic should now be a concern for almost every company.

Blombery believes that encryption is necessary to protect online data in transit from being compromised, but noted threats are always evolving. With over half of the traffic on the internet now encrypted with SSL, he said bad actors are leveraging SSL-based encryption to hide malicious activity from existing security controls and technology.

Consequently, Blombery said this means enterprises have lost the ability to look at the traffic that is traversing their network, opening themselves up to attack.

“This is becoming an increasing vector for attacks and compromises of networks,” he said. “I think SSL offers a very pertinent threat at the moment.”

Blombery said attacks often arrive via the likes of a Gmail account, which is encrypted to the desktop, with someone unwittingly opening a file containing a cryptolocker.

“Off they go, they’ve compromised that particular system and potentially the entire network,” he said. “Having SSL visibility is vital for Australian enterprises and I think they’re just starting to get that idea.”

As it often takes a breach for someone to jump on board with a specific security solution, Blombery said more and more Australian businesses are starting to become aware of the need to monitor SSL traffic because they have either been affected or heard of someone who has been affected by this sort of attack.

“There are serious breaches regularly, but everyone’s breach is serious for them,” he said. “Even the smallest of companies needs to be security conscious these days.”

The hardware for SSL inspection, the company said, is a device sitting on the perimeter that takes the SSL offload, decrypting traffic and then passing it on to the firewall or IPS.

“Once those devices do their job, they hand the traffic back to our device to re-encrypt and send on to the destination — that’s traffic coming in or out,” Blombery said.

With mandatory breach reporting laws not yet in place in Australia, Blombery noted that even if there were an abundance of breaches due to SSL traffic not being inspected, the public might not even know about it.

“For the individuals affected, you certainly want to know if your account or any account is being breached — you should be informed,” he added.

“A lot of people silly enough have the same password for everything or the same subset of passwords, so if a company you’re working with has been breached and you don’t have that visibility, then potentially all of your online identity can be compromised.”

A10 Networks recently completed its first acquisition, scooping up cloud application delivery firm Appcito.

“It really expands us into not just the cloud but as a cloud native company as well,” Blombery said. “Appcito brings load balancing as-a-service, in the cloud functionality that we’ll be able to tie in with our own existing infrastructure based functionality, and allow for common policy to support the applications whether they’re in the datacentre or in the public cloud somewhere.”

Blombery said Appcito is already embedded within A10 and is essentially the cloud division of the organisation.


System Requirements

Both the OsMonitor server and client run on Windows XP, Windows Server 2003/2008/2012/2016, Windows 7, Windows 8/8.1, and Windows 10, in both 32-bit and 64-bit editions.

Customer Review

We are now using your monitoring software, OsMonitor. It is great software; we are able to block non-business websites, monitor the activities of our users and the websites they visit, and even take snapshots. The majority of our needs are met by your software.