Light listening: Algorithmic surveillance

We’re so used to hearing about algorithms now that most people don’t spend much time thinking about them. They operate in the background, invisibly shaping our decisions as we go about our day. Most of us are quite clueless about how we’re manipulated by this technology.

This 22-minute talk from techno-sociologist Zeynep Tufekci is the antidote to that ignorance. As Dr Tufekci explains, these algorithms do more than make ads follow us around. They power Facebook’s dark ads that are used to manipulate voters, and they form the foundation for surveillance authoritarianism. Worse yet, it’s hard to know exactly how these algorithms operate and how we’re being affected.

Here’s a snippet from her talk:

Now, we started from someplace seemingly innocuous — online ads following us around — and we’ve landed someplace else. As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we’re just at the beginning stages of this. These algorithms can quite easily infer things like people’s ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes. These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people’s sexual orientation just from their dating profile pictures.

Now, these are probabilistic guesses, so they’re not going to be 100 percent right, but I don’t see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people. And here’s the tragedy: we’re building this infrastructure of surveillance authoritarianism merely to get people to click on ads. And this won’t be Orwell’s authoritarianism. This isn’t “1984.” Now, if authoritarianism is using overt fear to terrorize us, we’ll all be scared, but we’ll know it, we’ll hate it and we’ll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they’re doing it at scale through our private screens so that we don’t even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider’s web and we may not even know we’re in it.

These 22 minutes will bring you up to speed on how algorithms are shaping our lives and what that means for the future.


While the talk above focuses a lot on Facebook, Dr Tufekci points out that Amazon, too, is leading the way in algorithmic surveillance, especially with its release of the Echo Look.


If this subject interests you, check out the book Weapons of Math Destruction. It’s a deeper dive into how algorithms shape our lives. And it’s a quick read.

So how are you feeling about your future career?

“So what should we tell our children? That to stay ahead, you need to focus on your ability to continuously adapt, engage with others in that process, and most importantly retain your core sense of identity and values. For students, it’s not just about acquiring knowledge, but about how to learn. For the rest of us, we should remember that intellectual complacency is not our friend and that learning – not just new things but new ways of thinking – is a life-long endeavour.” – Blair Sheppard, Global Leader, Strategy and Leadership Development, PwC

60% think ‘few people will have stable, long-term employment in the future’. (PwC survey of 10,029 members of the general population based in China, Germany, India, the UK and the US.)

74% believe it’s their own responsibility to update their skills rather than relying on any employer.

Source: PwC Workforce of the Future report.

Upward mobility and clear career progression are no longer guaranteed. So how does this shape what we teach students about their careers? Learning to write a resume and taking career assessments seem quite pointless in the face of this type of change.

Hiring practices are about to get even more opaque

All that advice about plugging keywords into your resume to make sure it passes applicant tracking systems (ATS) is about to be useless. Here’s an excerpt from AI for Recruiting: A Definitive Guide for HR Professionals by Ideal.com, an AI-powered resume screening and candidate tracking solution for busy recruiters.

Intelligent screening software automates resume screening by using AI (i.e., machine learning) on your existing resume database. The software learns which candidates moved on to become successful and unsuccessful employees based on their performance, tenure, and turnover rates. Specifically, it learns what existing employees’ experience, skills, and other qualities are and applies this knowledge to new applicants in order to automatically rank, grade, and shortlist the strongest candidates. The software can also enrich candidates’ resumes by using public data sources about their prior employers as well as their public social media profiles.
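The “learn from existing employees, then rank new applicants” loop described above can be sketched in a few lines. To be clear, this is a toy illustration, not Ideal’s actual method: the skill sets, the success labels, and the simple success-rate weighting are all invented here.

```python
from collections import defaultdict

def learn_skill_weights(employees):
    """employees: list of (skills, successful) pairs for existing staff.
    Returns, per skill, the fraction of employees holding it who succeeded."""
    hits, totals = defaultdict(int), defaultdict(int)
    for skills, successful in employees:
        for s in skills:
            totals[s] += 1
            hits[s] += int(successful)
    return {s: hits[s] / totals[s] for s in totals}

def rank_candidates(candidates, weights):
    """Score each (name, skills) candidate by the mean weight of their
    known skills and return them sorted best-first."""
    def score(skills):
        known = [weights[s] for s in skills if s in weights]
        return sum(known) / len(known) if known else 0.0
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)

# Invented training data: which existing hires "worked out".
existing = [
    ({"sql", "python"}, True),
    ({"sql", "excel"}, False),
    ({"python", "stats"}, True),
]
weights = learn_skill_weights(existing)
ranked = rank_candidates([("A", {"excel"}), ("B", {"python", "stats"})], weights)
print([name for name, _ in ranked])  # candidate B outranks A
```

Even this toy version shows the bias problem: whatever made past hires “successful” (including biased promotion decisions) gets baked straight into the weights.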

Now for all the questions:

  • What are the “other qualities” that they measure?
  • How much weight do they give to experience vs. skills?
  • How much data does a company need to use these algorithms effectively?
  • How does a company without loads of data use this technology?
  • Who decides which data to use?
  • Who reviews the training data for accuracy and bias – the company or the vendor?
  • How does a company avoid bias, especially if the people who advance are all white men (due to unconscious bias in the promotion process)?
  • What data points are most valuable on candidates’ social profiles?
  • Which social profiles are they pulling from? Are personal websites included?
  • Which companies are using this technology?
  • Are candidates without publicly available social media data scored lower?
  • Of the companies using these technologies, who’s responsible for asking the questions above?

This technology gives a whole new meaning to submitting your resume into a black hole.

AI is going to make your asshole manager even worse

Before you continue reading, reflect on the last bad manager you had. Remember how they made you feel. Remember the things they did that made your life miserable. Remember the incompetence. Remember that managers don’t get promoted to management because they’re good managers.

I know, it’s not pleasant. I’ve had some pretty awful managers too (but I’ve also had a billion jobs, so it’s inevitable).

Ok. Now read on.

HR tech is hot. Nearly $2 billion in investment hot. And AI is hotter than bacon. So combining HR tech and AI is a sizzling idea (still with me?).

Enter all the startups ready to make managers’ lives easier/employees’ lives more miserable with algorithms to solve all the HR problems. The Wall Street Journal takes a peek into the future of management in How AI Is Transforming the Workplace:

“Veriato makes software that logs virtually everything done on a computer—web browsing, email, chat, keystrokes, document and app use—and takes periodic screenshots, storing it all for 30 days on a customer’s server to ensure privacy. The system also sends so-called metadata, such as dates and times when messages were sent, to Veriato’s own server for analysis. There, an artificial-intelligence system determines a baseline for the company’s activities and searches for anomalies that may indicate poor productivity (such as hours spent on Amazon), malicious activity (repeated failed password entries) or an intention to leave the company (copying a database of contacts). Customers can set activities and thresholds that will trigger an alert. If the software sees anything fishy, it notifies management.”
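Strip away the marketing and this is a classic pattern: establish a baseline of normal activity, then flag deviations from it. Here is a minimal sketch of that pattern with invented data and an arbitrary z-score threshold; nothing here reflects Veriato’s actual system.

```python
import statistics

def flag_anomalies(daily_counts, threshold=2.0):
    """daily_counts: per-day counts of some logged event (say, failed
    logins). Flags the indices of days that sit more than `threshold`
    standard deviations above the mean -- the 'baseline'."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.pstdev(daily_counts)
    if stdev == 0:  # perfectly uniform activity, nothing to flag
        return []
    return [i for i, c in enumerate(daily_counts)
            if (c - mean) / stdev > threshold]

failed_logins = [1, 0, 2, 1, 1, 25, 0, 1]  # day 5 is a burst
print(flag_anomalies(failed_logins))  # flags day 5 only
```

The catch, of course, is that the threshold is a human choice: set it low and every bad Tuesday triggers an alert to management; set it high and the baseline quietly absorbs real problems.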

Now remember your asshole manager. Imagine if they had access to this tool. Imagine the micromanagement.

Brutal.

(Side note: I wonder if employees get access to their bosses’ computer logs. Imagine that!)

Let’s keep going.

Another AI service lets companies analyze workers’ email to tell if they’re feeling unhappy about their job, so bosses can give them more attention before their performance takes a nose dive or they start doing things that harm the company.

Yikes.

It’s hard not to read that as an unhappy worker is somehow a threat to the company. Work isn’t all rainbows and unicorns. We can’t be happy 40 hours a week even in the best of jobs. Throughout our work lives we deal with grief, divorce, strained friendships, children, boredom, indecision, bad coworkers, bad bosses, bad news, financial stress, taking care of parents, etc etc etc. And sometimes that comes out in the course of our days spent buried in emails. The idea of management analyzing your emails on the watch for anything that isn’t rainbows ignores the reality of our work lives.
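To make the concern concrete, here is an entirely invented keyword-counting “unhappiness” score (not any vendor’s real method) that shows how crudely this kind of analysis can read a routine status update:

```python
# Toy "unhappiness" detector: invented word lists, invented scoring.
NEGATIVE = {"deadline", "complaint", "frustrated", "unfair"}
POSITIVE = {"thanks", "great", "glad"}

def unhappiness_score(email_text):
    """Return the share of sentiment-bearing words that are negative,
    from 0.0 (all positive) to 1.0 (all negative)."""
    words = [w.strip(".,!?") for w in email_text.lower().split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return neg / total if total else 0.0

# An ordinary status update about a tight deadline scores maximally "unhappy":
print(unhappiness_score("The deadline moved again, frustrated but we will ship."))
```

Real sentiment models are more sophisticated than word counting, but the failure mode is the same: a normal venting sentence inside a perfectly healthy work week gets scored, logged, and reported.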

What data is the algorithm built on? What are the signs of unhappiness? Bitching about a coworker? Complaining about an unreasonable deadline? Micromanaging managers? What’s the time frame? One day of complaints or three weeks? Since algorithms take time to tweak and learn, what happens to employees (and their relationships with management) who are incorrectly flagged as unhappy while the algorithm learns?

Moreover, what do those conversations look like when “unhappy” employees are being called into management’s office?

Manager: Well, we’ve called you in because our Algorithm notified me that you’re unhappy in your role.

Employee:

Manager: Right… so … can you tell me what’s making you so unhappy?

Employee: I’m fine.

Manager: Not according to The Algorithm. It’s been analyzing all your emails. I noticed you used the word “asshat” twice in one week to describe your cubicle mate. Your use of the f word is off the charts compared to your peers on the team. You haven’t used an exclamation point to indicate anything positive in at least three weeks. The sentiment analysis shows you’re an 8 out of 10 on the unhappy chart. Look, here’s the emoji the algorithm assigned to help you understand your unhappiness level.

Employee: It’s creepy you’re reading my emails.

Manager: Now remember, you signed that privacy agreement at the beginning of your employment and consented to this. You should never write anything in a company email that you don’t want read.

Employee:

And do the companies that purchase this technology even ask the hard questions?

The issue I have with this tech, apart from it being ridiculously creepy, is that it makes some seriously bad assumptions. It assumes:

  • All managers have inherently good intentions
  • All managers are competent
  • All organizations train their managers on how to be effective managers
  • All organizations train their managers on appropriate use of technology
  • Managers embrace new technology

Those are terrible assumptions. Here’s a brief, non-exhaustive list of issues I’ve had with managers over the past ten years:

  • Managers who can’t define what productivity looks like (beyond DO ALL THE THINGS)
  • Managers who can’t set and communicate goals
  • Managers who can’t listen to concerns voiced by the team (big egos)
  • Managers who can’t understand lead scoring and Google Analytics (from the CEO and the VP of sales and marketing, no doubt)
  • Managers who can’t use a conference call system (technology-am-I-right?!)
  • Managers with no interpersonal communication skills and no self-awareness

Maybe we can all save ourselves by adding a new question when it’s our turn to ask questions in the interview:

“Tell me about your approach to management. What data do you use to ensure your AI technology accurately assesses employee happiness?”

Maybe I’m just cynical. Maybe it’s because I’ve had a few too many bad managers (as have my peers). Maybe I just feel sorry for good employees struggling under bad management. And maybe organizations should get better about promoting people who can manage (i.e., people with soft skills) instead of those who can’t before this technology is adopted.

Anyhow, to wrap up, this whole post has me feeling so grateful for the good managers I’ve had. The ones who got it right. Who listened, encouraged, and provided constructive feedback on all my work. And though I’m sure they’re not reading this post, a shout-out to my favorite, amazing managers from two very different jobs: Kirsten and Cathy. They didn’t need an algorithm to understand their team’s performance and employee happiness. They had communication skills, empathy, and damn good people skills that made working for them a delight.