Monthly archives for January, 2018

With businesses fumbling, Singapore must take more care in data aspirations

The Singapore government has been opening up access to user data to ease information exchange and business transactions, but it should observe some caution as major organisations continue to slip up over security.

Its efforts were touted as essential to the country’s smart nation drive, where emphasis has been placed on providing data to spur the development of new citizen services and support data analytics and the Internet of Things (IoT).

However, with businesses losing customer data to hackers and resorting to questionable practices in managing such data, the Singapore government needs to take a step back and evaluate potential risks it may be introducing to citizens in opening up access to their data.

My own concerns here were compounded when I recently changed banks in refinancing a home loan. After almost two frustrating months of back and forth as the bank, to which I was moving the loan, asked for supporting documents and other details, the transfer was finally approved and I was asked to make a visit to sign the final application form.

Only then was I informed that, as a condition of taking on the bank’s loan, I would have to purchase home insurance coverage from its insurance partner, even though I already had an existing policy from another provider. I also was required to buy a mortgage insurance policy from, again, its preferred partner.

When I expressed my displeasure that I wasn’t told about this before I started the application process and, more importantly, over the lack of consumer choice, the bank said I could still decide not to go ahead with the transfer. However, after spending two months pushing through the process, I certainly wasn’t ready to waste another two months sourcing for and signing up with another bank.

Also, buried in the fine print, the bank stated that it could share my personal data with the partner, which in turn had the option to use my data to send me marketing mailers, among other things.

Presumably, because it is a major market player, the bank has included these service terms legally and within the confines of Singapore’s personal data protection laws.

If that is the case, consumers like me have more cause for concern, especially as more partnerships between different industry sectors are established and more of our data faces the possibility of being “cross-pollinated”.

Sign up as a bank customer and you’ll receive marketing messages from insurance companies you’re not a customer of; buy a latte and get a push message from an online furniture shop offering to sell you the chair you’re sitting on in the cafe.

And that’s just cause for minor irritation, compared to the heightened risk consumers then will face with their data increasingly exposed as more and more companies gain access to it.

As it is, even global companies including Uber and AXA Insurance have fallen prey to hackers, with customer data, including that of users in Singapore, being compromised. The Singapore government itself has suffered security breaches and uncovered lapses in its IT system controls.

More worrying, cybersecurity still isn’t a top priority in boardroom discussions despite most companies in this region having experienced a security breach.

The Singapore government has assured that citizen data are safely protected across its agencies’ databases and systems, but that alone isn’t enough. With businesses sharing customer data amongst their partners, including the likes of Google that continue to collect information without consent, the government needs to also ensure access to citizen data serves only to facilitate a specific transaction and to the citizen’s benefit.

Organisations that are given access should have their systems and security measures audited, and they must adhere to guidelines on how citizen data should be managed and used.

Easing data access to improve service delivery is a good thing, but this should be carried out alongside strict policies to make sure businesses do not step out of line. One wrong step and citizens will lose confidence in the system, and Singapore’s smart nation drive will face a serious roadblock.


Notifiable Data Breaches scheme: Getting ready to disclose a data breach in Australia

Australia’s Notifiable Data Breaches scheme will come into force next month. Here is what it means and how it will affect organisations and individuals in Australia.


Australia’s Notifiable Data Breaches (NDB) scheme comes into effect on February 22, 2018, and as the legislative direction is aimed at protecting the individual, there’s a lot of responsibility on each organisation to secure the data it holds.

The NDB scheme falls under Part IIIC of the Australian Privacy Act 1988 and establishes requirements for entities in responding to data breaches.

What that means is all agencies and organisations in Australia that are covered by the Privacy Act will be required to notify individuals whose personal information is involved in a data breach that is likely to result in “serious harm”, as soon as practicable after becoming aware of a breach.

Tax file number (TFN) recipients, to the extent that TFN information is involved in a data breach, must also comply with the NDB.

In addition to notifying affected individuals, under the scheme organisations must provide recommendations on how those affected should respond now that their information may be exposed. The Australian Information Commissioner, currently Timothy Pilgrim, must also be notified of the breach.

“The NDB scheme formalises an existing community expectation for transparency when a data breach occurs,” Pilgrim told ZDNet. “Notification provides individuals with an opportunity to take steps to protect their personal information, and to minimise their risk of experiencing harm.”

Intelligence agencies, not-for-profit organisations and small businesses with annual turnover of less than AU$3 million, credit reporting bodies, and political parties are exempt from the NDB.


In general terms, an eligible data breach refers to the unauthorised access, loss, or disclosure of personal information that could cause serious harm to the individual whose personal information has been compromised.

Examples of a data breach include when a device containing customers’ personal information is lost or stolen, a database containing personal information is hacked, or personal information is mistakenly provided to the wrong person.

An employee browsing sensitive customer records without any legitimate purpose could constitute a data breach as they do not have authorised access to the information in question.

The NDB scheme uses the phrase “eligible data breaches” to specify that not all breaches require reporting. An example of this is where Commonwealth law prohibits or regulates the use or disclosure of information.

An enforcement body — such as the Australian Federal Police (AFP), the police force or service of a state or a territory, the Australian Crime Commission, and the Australian Securities and Investments Commission — does not need to notify individuals about an eligible data breach if its CEO believes on reasonable grounds that notifying individuals would be likely to prejudice an enforcement-related activity conducted by, or on behalf of, the enforcement body.

A spokesperson for the AFP told ZDNet that, although the agency is not always required to disclose a breach, it would comply with its notification obligations in all circumstances where there are no relevant exemptions under the Act.

If the Australian Information Commissioner rules the breach is not bound by the NDB scheme, organisations may not have to disclose it any further.

In addition, data breaches that are notified under section 75 of the My Health Records Act 2012 do not need to be notified under the NDB scheme, as they follow their own binding process, which also falls under the umbrella of the Office of the Australian Information Commissioner (OAIC).


As the NDB dictates an objective benchmark in that the scheme requires a “reasonable person” to conclude that the access or disclosure is “likely to result in serious harm”, Melissa Fai, special counsel at Gilbert + Tobin, told ZDNet that in assessing the breach, an organisation should interpret the term “likely” to mean more probable than not — as opposed to merely possible.

“Serious harm” is not defined in the Privacy Act; but in the context of a data breach, serious harm to an individual may include serious physical, psychological, emotional, financial, or reputational harm.

Information that can cause serious harm includes information about an individual’s health; documents commonly used for identity fraud, including Medicare card, driver’s licence, and passport details; financial information; and a combination of types of personal information, rather than a single piece, that allows more to be known about an individual.

In assessing the risk of serious harm, entities should consider the broad range of potential kinds of harm that may follow a data breach.


Agencies and organisations that suspect an eligible data breach may have occurred must undertake a “reasonable and expeditious assessment” based on the above guidelines to determine if the data breach is likely to result in serious harm to any individual affected.

If an entity is aware of reasonable grounds to believe that there has been an eligible data breach, it must promptly notify individuals at risk of serious harm and the commissioner about the breach.

The notification to affected individuals and the commissioner must include the following information: The identity and contact details of the organisation, a description of the data breach, the kinds of information concerned, and recommendations about the steps individuals should take in response to the data breach.
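A notification carrying those four elements could be modelled as a simple record. This is a rough sketch only; the field names below are my own illustration, not terms prescribed by the Act, which specifies the content of a notification rather than any particular format:

```python
from dataclasses import dataclass

# Illustrative structure only; the NDB scheme prescribes what a
# notification must contain, not how it is represented.
@dataclass
class BreachNotification:
    entity_identity_and_contact: str   # who suffered the breach and how to reach them
    breach_description: str            # what happened
    kinds_of_information: list         # e.g. names, addresses, payment details
    recommended_steps: list            # what affected individuals should do in response

note = BreachNotification(
    "Example Pty Ltd, privacy@example.com",
    "Laptop containing customer records lost in transit",
    ["names", "email addresses"],
    ["watch for phishing emails", "change account passwords"],
)
```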

Entities have 30 days to conduct an assessment if they are unsure whether a breach meets the threshold of an eligible data breach. As soon as they believe a breach is an eligible data breach, they must notify individuals and the commissioner as soon as practicable.

The NDB scheme, however, provides entities with the opportunity to take remedial steps to address a data breach in a timely manner and avoid the need for further notification, including notifying individuals whose data may have been exposed.


Failure to comply with the NDB scheme will be “deemed to be an interference with the privacy of an individual” and there will be consequences.

Gilbert + Tobin’s Fai explained that if an organisation is found to have hidden an eligible data breach, or otherwise failed to report one, that failure will be considered an interference with the privacy of each individual affected. Serious or repeated interferences with the privacy of an individual can give rise to civil penalties under the Privacy Act.

If the data breach that the organisation has failed to report is serious, or if the organisation has failed to report an eligible data breach on two or more separate occasions, Fai explained the OAIC has the ability to seek a civil penalty order against the organisation of up to AU$2.1 million, depending on the significance and likely harm that may result from the data breach.

“Of course, an organisation must also consider the risk of reputational damage to its brand and the commercial damage that might flow from that, particularly given the growing importance to an organisation’s bottom line of consumer trust in an organisation’s data management policies and processes and its ability to respond quickly, effectively, and with integrity to data breaches,” Fai added.

“The effects of the data breach on Equifax last year and its response are a case in point.”


The commissioner has a number of roles under the NDB scheme, which includes receiving notifications of eligible data breaches; encouraging compliance with the scheme, including by handling complaints, conducting investigations, and taking other regulatory action in response to instances of non-compliance; and offering advice and guidance to regulated organisations, and providing information to the community about the operation of the scheme.

The OAIC has published guidelines on the scheme, which also includes information on how to deal with the aftermath of a breach.


The federal government finally passed the data breach notification laws at its third attempt in February 2017.

A data breach notification scheme was recommended by the Joint Parliamentary Committee on Intelligence and Security in February 2015, prior to Australia’s mandatory data-retention laws being implemented.


According to Gilbert + Tobin, organisations should be at the very least getting familiar with what data they have, where it is kept, and who has access to it.

Assessing existing data privacy and security policies and procedures to make sure organisations are in a position to respond appropriately and quickly in the event of a data breach is also important.

“This should include a data breach response plan which works across diverse stakeholders in an organisation and quickly brings the right people — such as from IT, legal, cybersecurity, public relations, management, and HR — together to respond effectively,” Fai told ZDNet.

It wouldn’t hurt to continuously audit and strengthen cybersecurity strategies, protection, and tools to avoid and prevent data breaches.

“It is also important that an organisation’s personnel are aware of the NDB scheme. Personnel need appropriate training, including to identify when an eligible data breach may have occurred and how to follow an entity’s policies and procedures on what to do next,” Fai explained, adding this also extends to suppliers and other third-parties that process personal information on their behalf.


From May this year, the General Data Protection Regulation (GDPR) will come into effect, requiring organisations around the world that hold data belonging to individuals in the European Union (EU) to provide a high level of protection and know exactly where that data is stored.

Organisations that fail to comply with the regulation requirements could be slapped with administrative fines up to €20 million, or in the case of an undertaking, up to 4 percent of the total worldwide annual turnover of the preceding financial year, whichever is higher.
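The “whichever is higher” rule means the cap scales with company size rather than stopping at a flat figure. A quick sketch of the arithmetic (the function name is mine, for illustration):

```python
def gdpr_max_fine_eur(annual_turnover_eur: float) -> float:
    # GDPR administrative fine cap: EUR 20 million or 4 percent of total
    # worldwide annual turnover, whichever is higher.
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# For a company turning over EUR 1 billion, 4 percent (EUR 40m) exceeds
# the flat EUR 20m cap, so the higher figure applies:
print(gdpr_max_fine_eur(1_000_000_000))  # 40000000.0
```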

The laws do not stop at European boundaries, however, with those in the rest of the world, including Australia, bound by the GDPR requirements if they have an establishment in the EU, if they offer goods and services in the EU, or if they monitor the behaviour of individuals in the EU.

The GDPR and the Australian Privacy Act share many common requirements, but there are notable differences, one crucial element being the time allowed to disclose a breach.

Under the NDB scheme, organisations have a maximum of 30 days to declare the breach; under the GDPR, organisations have 72 hours to notify authorities after having become aware of it, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons.
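The two clocks can be sketched side by side. This is a rough illustration only: the NDB’s 30 days covers the assessment of a suspected breach, while the GDPR’s 72 hours is the notification deadline itself, and the function names are my own:

```python
from datetime import datetime, timedelta

def ndb_assessment_deadline(aware: datetime) -> datetime:
    # NDB scheme: up to 30 days to assess whether a suspected breach is "eligible"
    return aware + timedelta(days=30)

def gdpr_notification_deadline(aware: datetime) -> datetime:
    # GDPR: notify the supervisory authority within 72 hours of becoming aware
    return aware + timedelta(hours=72)

aware = datetime(2018, 2, 22, 9, 0)
print(ndb_assessment_deadline(aware))     # 2018-03-24 09:00:00
print(gdpr_notification_deadline(aware))  # 2018-02-25 09:00:00
```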

“In sum, if an Australian organisation is subject to the GDPR regime when it comes into effect in May this year, it needs to comply with its obligations under both regimes — although the two regimes contain different requirements, they are not mutually exclusive,” Fai added. “However, when it comes to data breaches, the high watermark of compliance is complying with the European regime.”


Any organisation that has purchased a security solution from a vendor knows that there is no silver bullet to completely secure an organisation.

“When it comes to data breaches, everybody is looking for something, a product, a process, a standard to prevent them completely. Unfortunately, this isn’t possible,” Symantec CTO for Australia, New Zealand, and Japan Nick Savvides told ZDNet.

“The first thing any organisation should do is understand that data breaches are not always preventable but they are mitigatable. Whether the data breach is a result of a compromise, malicious insider, or even a well-meaning insider accidentally leaking information, mitigations exist.”

Breaking the mitigations into three parts, Savvides said the first is dealing with a malicious attacker; the second is information-centric security, which he said applies to all scenarios; and the third is the response plan.

“Most organisations don’t have very effective response plans for a data breach event. They might have a plan, but from what has been seen, the plans are generally very academic in nature rather than practical and often get bypassed in the case of a real event,” he explained.

“Organisations need to have processes for having incidents reported, a clear plan on who to involve, what process to follow, and a clear PR message.”

Savvides said it is clear that users value transparency and plain speech over the ambiguous legalese responses some organisations have produced.

“The commencement of the scheme is also a timely opportunity for organisations to take stock of the personal information they collect and hold, and how it is managed,” Pilgrim added. “By ensuring personal information is secured and managed appropriately, organisations can reduce the likelihood of a data breach occurring in the first place.”


Business must tone down its lust for big data

Privacy is a human right, and businesses need to remember that. So do governments.

It should come as no surprise that when key industry bodies write submissions to government consultations they’re self-serving. That’s what such lobby groups are for, right?

But in its submission to the current consultation on developing a national Digital Economy Strategy, the Australian Chamber of Commerce and Industry (ACCI) has gone beyond the usual bleatings about tax breaks, more “flexible” employment conditions, and a call for the government to pay for the vocational training that businesses have long since stopped doing for themselves.

The ACCI wants more access to government data.

“Other governments, such as the United Kingdom and Canada, are ahead of the Australian government in terms of open data,” the ACCI writes in its submission [PDF].

“It is vital for businesses to have access to cohesive and complete public datasets. Datasets provided by the government that are more complete can, in turn, produce more accurate analytics, drive efficiencies and productivity in both the public and private sectors. If the range and breadth of raw government data increased, it would encourage digital integration between the public and private sector in Australia.”

Leaving aside the question of whether such access really is “vital” rather than merely “useful”, we should remember that this data has been collected at taxpayers’ expense. Nowhere does the ACCI suggest that businesses might pay for it, nor does it suggest a modest increase in the corporate tax rate. Of course.

The ACCI also calls for more system integration and interoperability between government agencies, so that “data would be requested from businesses only once … This could also be expanded to include data exchange capabilities between different international jurisdictions”.

There are barriers to overcome, of course. The ACCI identifies, for example, “legislative restrictions; a culture of risk aversion; lack of national leadership for data sharing and release; and, [that] the extent of productive linking and integration of datasets varies substantially across jurisdictions.”

Yet nowhere in the ACCI’s submission is the word “privacy”.

Nowhere is the phrase “data breach”.

That’s a worry, especially given the rapidly increasing ease but little-understood risks of the re-identification of supposedly de-identified data. Look no further than the recent re-identification of Australian health data that the government had published.

Privacy has taken a back seat to a lust for big data, according to Steve Wilson, vice-president and principal analyst with Constellation Research.

“Data scientists seem to think they can tick a privacy box and just get on with their analyses, perhaps because some consultant has said ‘privacy is a positive sum game’,” Wilson told ZDNet.

“Well no, privacy is about restraint. Privacy is mostly not about what we do with data, but what we don’t do with data. Privacy considerations mean that the risk of some of big data’s grand missions might just not be worth it.”

Wilson believes that some people have a “fetish for data and open data”, a largely unproven faith that all this data will lead to better evidence-based policy.

I agree.

“Big data is a dangerous, faith-based ideology. It’s fuelled by hubris, it’s ignorant of history, and it’s trashing decades of progress in social justice,” I wrote in 2014.

Since then little has changed, although it’s possible that the increasing public awareness of the scale and scope of data collection, and the expanding news coverage given to data breaches, may change that. Australia’s mandatory data breach notification laws come into force in just a few weeks. Wait and see.

“I don’t believe we have properly accounted for the privacy risks,” Wilson said.

“People have a human right to privacy, but I am not aware of any basic business right to obtain and process data.”


Carphone Warehouse fined £400,000 over 2015 data breach

The successful cyberattack exposed information belonging to millions of UK customers.

Carphone Warehouse has been slapped with a £400,000 fine for a data breach which led to the theft of information belonging to millions of customers.

On Wednesday, the UK Information Commissioner’s Office (ICO) said the fine is one of the largest issued in the data watchdog’s history.

In 2015, Carphone Warehouse said that a data breach had led to the theft and exposure of sensitive, personal information belonging to up to 2.4 million customers.

However, an investigation revealed that the security incident actually allowed unauthorized access to the data of over three million customers and roughly 1,000 employees.

The names, addresses, dates of birth, marital status, and historical payment card details of customers were stolen, alongside the names, phone numbers, postcodes, and car registration details of staff members.

The “sophisticated cyberattack” attracted the attention of the ICO, which said, “the personal data involved would significantly affect individuals’ privacy, leaving their data at risk of being misused.”

According to the agency, the UK mobile device retail giant’s approach to data security was inadequate, and Carphone Warehouse had failed to take “adequate steps” to protect data — a serious breach of the Data Protection Act 1998.

The data breach occurred after attackers obtained login credentials through WordPress software that had not been kept up to date and patched against vulnerabilities.

Carphone Warehouse also failed to keep other software up-to-date and did not carry out regular security tests. The company also did not identify and purge historic data properly — which means that the firm may have kept information on file without cause.

“A company as large, well-resourced, and established as Carphone Warehouse should have been actively assessing its data security systems, and ensuring systems were robust and not vulnerable to such attacks,” said Information Commissioner Elizabeth Denham. “Carphone Warehouse should be at the top of its game when it comes to cybersecurity, and it is concerning that the systemic failures we found related to rudimentary, commonplace measures.”

There have been no reported cases of customer or staff information sales or abuse to date and the company has fixed “some” of the problems highlighted by the ICO.

However, with data protection regulations set to become tougher in the UK with the introduction of the General Data Protection Regulation (GDPR), which requires protection by design, Carphone Warehouse — and every other company in the country — will need to do better than fix “some” problems to avoid future fines.

“There will always be attempts to breach organizations’ systems and cyber-attacks are becoming more frequent as adversaries become more determined,” Denham added. “But companies and public bodies need to take serious steps to protect systems, and most importantly, customers and employees.”


240,000 Homeland Security employees, case witnesses affected by data breach

A database used by the Department of Homeland Security’s Office of the Inspector General has been confirmed as breached, affecting 247,167 current and former employees and individuals associated with the department’s previous investigations.

The United States Department of Homeland Security (DHS) has confirmed the breach of the DHS Office of Inspector General (OIG) Case Management System (CMS), affecting approximately 247,167 individuals employed by DHS in 2014, as well as individuals including subjects, witnesses, and complainants associated with DHS OIG investigations from 2002 through 2014.

DHS issued a statement on Wednesday after it sent the affected individuals a letter notifying them that they may have been impacted by a “privacy incident” relating to the CMS.

It held firm that the privacy incident did not stem from a cyber attack by external actors, and that “the evidence indicates that affected individuals’ personal information was not the primary target of the unauthorised transfer of data”.

DHS said that on May 10, 2017, DHS OIG discovered an unauthorised copy of its CMS in the possession of a former DHS OIG employee as part of an ongoing criminal investigation.

“The privacy incident did not stem from a cyber attack by external actors, and the evidence indicates that affected individuals’ personal information was not the primary target of the unauthorised exfiltration,” DHS wrote to those affected.

Notification letters were sent on December 18, 2017 to all current and former employees potentially affected by the breach of the employee data. DHS said that, due to technological limitations, it is unable to provide direct notice to the individuals affected by the investigative data, and has asked those individuals to reach out to the department.

In the letter penned by DHS chief privacy officer Phillip S Kaplan, the department offered all individuals potentially affected by the incident 18 months of free credit monitoring and identity protection services.

“The Department of Homeland Security takes very seriously the obligation to serve the department’s employees, and is committed to protecting the information in which they are entrusted,” the department wrote. “Please be assured that we will make every effort to ensure this does not happen again.”

DHS said it is implementing additional security precautions to limit which individuals have access to its information, as well as more stringent checks to identify unusual access patterns.


Privacy Foundation: Trusting government with open data a ‘recipe for pain’

The Australian Privacy Foundation (APF) has called out the federal government and the Office of the Australian Information Commissioner (OAIC) after the latter failed to publish a report on the September 2016 incident that revealed Medicare Benefits Schedule (MBS) and Pharmaceutical Benefits Scheme (PBS) data had not been encrypted properly.

The dataset was found by a team of researchers from the University of Melbourne and was subsequently pulled down by the Department of Health.

At the time, the OAIC announced it was investigating the publication of the datasets; however, more than 12 months later, it is still investigating.

Of concern to the APF is that there has been no public report, nor warning about the bug in open data; no indication of when the report will be released; and no requirement to reconsider the misplaced trust in the de-identification of open data.

“You should be able to trust governments to care for sensitive personal data about yourself and your family. Clearly some of those who are handling this data either lack expertise, or are careless: It appears that ‘Open Data’ protections can be breached,” a statement from the APF reads.

While the APF agrees there can be benefits from the sharing of health and other personal information among health care professionals and researchers, it said the sharing must be based on an understanding of potential risks.

“It must only occur within an effective legal framework, and controls appropriate for those risks,” the APF continued.

“A ‘Trust me, I’m from the government!’ approach is a recipe for pain. So is sharing such sensitive data with government without full openness, transparency, and a legal framework that prevents them from misusing it out of the public eye.”

The research team that re-identified the data in September 2016, consisting of Dr Chris Culnane, Dr Benjamin Rubinstein, and Dr Vanessa Teague, reported in December that further information, such as the medical billing records of one-tenth of all Australians (approximately 2.9 million people), was potentially re-identifiable in the same dataset.

“We found that patients can be re-identified, without decryption, through a process of linking the unencrypted parts of the record with known information about the individual such as medical procedures and year of birth,” Dr Culnane said.

“This shows the surprising ease with which de-identification can fail, highlighting the risky balance between data sharing and privacy.”
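To make the linking process concrete, here is a toy sketch with invented data (not the researchers’ actual method or the real dataset): matching unencrypted quasi-identifiers, such as year of birth and a medical procedure, against outside knowledge can single out a record without touching any encryption.

```python
# "De-identified" records: names removed, but quasi-identifiers left in the clear.
# All data here is invented for illustration.
deidentified_records = [
    {"record_id": "r001", "year_of_birth": 1975, "procedure": "knee reconstruction"},
    {"record_id": "r002", "year_of_birth": 1982, "procedure": "appendectomy"},
    {"record_id": "r003", "year_of_birth": 1982, "procedure": "knee reconstruction"},
]

# Auxiliary knowledge an attacker might already hold about a known person.
known_person = {"name": "Alice", "year_of_birth": 1982, "procedure": "appendectomy"}

matches = [
    r for r in deidentified_records
    if r["year_of_birth"] == known_person["year_of_birth"]
    and r["procedure"] == known_person["procedure"]
]

# A unique match re-identifies the record -- no decryption required.
if len(matches) == 1:
    print(f"{known_person['name']} links to record {matches[0]['record_id']}")
```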

The team warned that they expect similar results with other data held by the government, such as Census data, tax records, mental health records, penal data, and Centrelink data.

The large-scale dataset relating to the health of many Australians, under what the APF labelled as “the fashionable rubric of open data”, included all publicly reimbursed medical and pharmaceutical bills for selected patients spanning the thirty years from 1984 to 2014. The data as released was meant to be de-identified, meaning that it supposedly could not be linked to a particular individual.

“Unfortunately, the government got it wrong: This weak protection can be breached,” the APF added.

See also: Australian Privacy Foundation wants ‘privacy tort’ to protect health data

The Privacy Foundation believes the Department of Health and its minister should be held to account for the data being re-identifiable, as should the OAIC, with the APF expanding on its previous claims that the agency led by Timothy Pilgrim was being “underfed”.

“The OAIC should act like a watchdog, not like a rather timid snail,” the APF said on Monday, hoping that the appointment of Christian Porter, the former Minister for Social Services who replaced George Brandis as Attorney-General in December, will see adequate resources provided to the agency.

As a result of the issues found by the University of Melbourne, the Australian government in October 2016 proposed changes to the Privacy Act 1988 that would criminalise the intentional re-identification and disclosure of de-identified Commonwealth datasets, reverse the onus of proof, and apply retrospectively from September 29, 2016.

Under the changes, anyone who intentionally re-identifies a de-identified dataset from a federal agency could face two years’ imprisonment, unless they work in a university or other state government body, or have a contract with the federal government that allows such work to be conducted.

The university team said the proposed legislation will have a chilling effect on research and jeopardises efforts to make sure open data is properly protected.

“Whilst open data is not a safe approach for releasing this type of data, open government is the right paradigm for deciding what is,” the team said. “One thing is certain: Open publication of de-identified data is not a secure solution for sensitive unit-record level data.”

Speaking a few months after the first batch of information was re-identified, Pilgrim said building trust with the public is key to the challenges big data presents for organisations, including government, and highlighted that trust is further challenged by the nature of secondary uses of data.

“Part of the solution, potentially a significant part I suggest, lies in getting de-identification right,” he said during a data sharing and interoperability workshop at the GovInnovate summit in Canberra in late 2016.

“This includes ensuring that government agencies, regulators, businesses, and technology professionals have a common understanding as to what ‘getting it right’ means.

“At the moment, that common clarity is not evident.”

While Pilgrim said that de-identification can be a smart and contemporary response to the privacy challenges of big data, which he said aims to separate the “personal” from the “information” within data sets, the commissioner highlighted that there was no clear-cut definition of how far-removed personal identifiers needed to be before the dataset is considered de-identified.

“I stress as privacy commissioner that de-identification is not the only approach available to manage the privacy dimensions of big data, but we are keen to explore its potential when done fully and correctly,” he said.

“That potential could include the ability to facilitate data sharing between agencies, and unlock policy and service gains of big data innovation, whilst protecting the fundamental human right to privacy.

“That is a great prospect, and one worth pursuing.”

See also: OAIC and Data61 offer up data de-identification framework

Given the investigation into the MBS and PBS datasets is ongoing, the OAIC said on Monday it is unable to comment on it further at this time.

“The commissioner will make a public statement at the conclusion of the investigation,” a statement from the OAIC reads.

“The OAIC continues to work with Australian Government agencies to enhance privacy protection in published datasets.”

