The interaction of technology and society is sensitive and nuanced when it comes to the working world. Profound issues can arise, sometimes quite unexpectedly: have I handled privacy correctly? Will new productivity software empower or enslave? Can this algorithm really be racially biased? HR tech must work within a welter of laws and regulations, and the infinite intricacies of human interactions.
The consumer goods giant Unilever now uses AI to sift first-round job applicants. The technology, provided by HireVue, analyses candidates’ facial expressions and vocal features during a video interview, then shortlists those who best match features learned from a training set of interviews given by now-successful employees. How tempting – automated screening of thousands of hopefuls down to a shortlist at a fraction of the cost of conventional means…
However, there’s a catch: bias, the ever-present foible in machine-learning algorithms trained on finite datasets. Bias derived from training sets (and sometimes from algorithmic choices) is universally known but frequently played down or ignored. That is risky. Racial bias in a training set, for example, can easily be replicated and promulgated through machine learning. And the bias need not be deliberate: simple under-representation will skew a dataset and encourage decisions that chime with its “norms”. Even Amazon got caught out when its AI hiring tool discriminated against women because it had learned to seek candidates resembling Amazon’s existing (largely male) workforce. However, as Prof. Sendhil Mullainathan says, biased algorithms are easier to fix than biased people! Blending technical and legal nous is a vital service to our clients who are breaking new ground, and finding new risks, in applying machine learning to society, HR tech being a prime example.
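The mechanism is easy to demonstrate. The sketch below is a hypothetical toy, not any real screening product: a trivial “screener” learns the average feature vector of past hires and ranks candidates by closeness to that norm. With one group under-represented in the training set, two equally qualified candidates are ranked differently purely because of a feature that correlates with group membership.

```python
# Toy illustration only: how under-representation in a training set skews
# an automated screener's learned "norm". All feature values are invented.

def train_norm(training_set):
    """Learn the average feature vector of past successful hires."""
    n = len(training_set)
    dims = len(training_set[0])
    return [sum(row[i] for row in training_set) / n for i in range(dims)]

def score(candidate, norm):
    """Higher score = closer to the learned norm (negative squared distance)."""
    return -sum((c - m) ** 2 for c, m in zip(candidate, norm))

# Training set: nine past hires from group A, one from group B.
# The single feature stands in for anything correlated with group
# membership (speech patterns, CV style, and so on).
training = [[1.0]] * 9 + [[0.0]] * 1

norm = train_norm(training)  # skewed toward group A's feature value

# Two equally qualified new candidates, differing only in that feature:
candidate_a, candidate_b = [1.0], [0.0]
print(score(candidate_a, norm) > score(candidate_b, norm))  # True: A ranks higher
```

No one coded “prefer group A”; the preference emerges from the data alone, which is exactly why such bias is so easy to play down or miss.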
Another area where the rush to innovate is generating novel risks is the gig economy. Gone (largely) are the days of the “master-servant” relationship and its relative legal simplicity, when employment law was little more than a branch of contract law. Employment rights have grown in response to the forces of society and politics, and rightly protect the fundamentals: equal treatment, maternity, holiday, working hours and much more. Few areas of the law are changing and growing as rapidly. But who gains protection? A nine-to-five full-timer in a big company certainly looks like an employee. A touring musician, a builder, a freelance journalist – self-employed. But a Pimlico Plumber, an Uber driver, or a Deliveroo rider? A “worker”, the courts have said, to the surprise and cost of their respective companies, who thought their novel (tech-enabled) working models placed them outside the strictures of many legal protections afforded to workers and employees. Early legal analysis, however difficult on untested ground, has to be built into the design and strategy of ventures that break the mould and enable new ways of working.
Surveillance in workplaces is not new. But the technological possibilities now are startling: with monitoring of keyboard activity, application use, screenshots and webcams, Big Brother can be watching you. Only part-hidden by their euphemistic “productivity” badges, these technologies raise profound and difficult questions – legal, ethical and human. Whither trust, for example? As innovators and entrepreneurs seek to develop technology to manage workplace relations, and enterprises start to deploy it, we think the need for holistic advice – legal, ethical and philosophical – is set to soar. The law of unintended consequences is certain to bite the ill-advised.