Get a degree in weed, get a job

No other 4-year undergraduate degree program in the world combines rigorous coursework in chemistry and biology with research and hands-on instrumental analysis built into the curriculum to prepare its graduates for a career in the cannabis industry. The additional focus on entrepreneurship and laboratory accreditation standards means that our graduates will not only be qualified to perform the instrumental analysis in a laboratory, but will also be empowered to build their own testing laboratory, dispensary, and growing operation from the ground up.

Northern Michigan University launched a degree in cannabis. Officially titled Medicinal Plant Chemistry, it’s interdisciplinary too, combining chemistry, biology, botany, horticulture, marketing, and finance.

“We’ve had an overwhelming response from growing operations, dispensaries and other businesses who want to take on our students as interns,” said NMU chemistry professor Brandon Canfield, adding that a stereotypical stoner need not apply.

That’s the part that caught my attention: employability. Gardeners and concentrate makers can earn six figures, and cannabis industry job listings abound. These graduates will be in demand.

In my last role at an East Coast Ivy League career services center, several MBA students were interested in the cannabis industry. It’s not hard to see why, with the market projected to reach $21.6 billion by 2021. Yet career services wouldn’t touch it. They weren’t open to building relationships with cannabis employers, despite the industry thriving out West (as a Pacific Northwest transplant, I advised students on opportunities). Innovation was cool as long as it fit within the status quo.

While I wrote this post to share this new intersection of cannabis and higher education, there’s a bigger takeaway: innovation doesn’t necessarily come from the top schools. There’s a tendency in higher education circles to look towards the Ivies and Stanfords as exemplars of innovation. But that’s misguided: they’re well-funded, resource-rich institutions, and sometimes they’re actually risk-averse. I’d like to see that mentality change. Innovation can be found across all types of institutions, not just the Ivy League. Take a look at Vanderbilt’s work on digital pedagogy. Or the partnership between 11 universities to improve retention rates among low-income students. Then there’s the future-oriented career services training at Hazard Community College. Lansing Community College is using open resources and free online texts instead of textbooks to make college more affordable. And in the online education space, an area traditionally thought of as US-dominated, there are fascinating ideas happening outside the US.

No doubt NMU is blowing up because cannabis is sexy hot right now. I’d love to see higher education circles promote more creative, forward-thinking degree programs from lesser-known schools.

Update: I just searched cannabis jobs on LinkedIn and 420 results were displayed. Well played, LinkedIn, well played.


Will black box algorithms be the reason you don’t get your next job?

A good example is today’s workplace, where hundreds of new AI technologies are already influencing hiring processes, often without proper testing or notice to candidates. New AI recruitment companies offer to analyze video interviews of job candidates so that employers can “compare” an applicant’s facial movements, vocabulary and body language with the expressions of their best employees. But with this technology comes the risk of invisibly embedding bias into the hiring system by choosing new hires simply because they mirror the old ones.

– Artificial Intelligence—With Very Real Biases

Beyond bias, we should be asking serious questions about the data these algorithms are built on: what evidence links facial movements, vocabulary, and body language to job performance in the first place?
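To see how “choosing new hires simply because they mirror the old ones” plays out mechanically, here’s a minimal, hypothetical sketch in Python: score each candidate by distance from the average profile of current “best employees.” The features and numbers are invented, not anything a real vendor has disclosed.

```python
# Hypothetical sketch: rank candidates by how closely their interview
# "features" match the average profile of current top performers.
# Whatever the current team has in common, relevant or not, becomes
# the hiring bar. All features and numbers are invented.
import math

# Features extracted from video: [smiles_per_minute, eye_contact_ratio]
best_employees = [[3.0, 0.8], [2.8, 0.9]]
ideal = [sum(col) / len(col) for col in zip(*best_employees)]  # average profile

def distance(candidate, profile):
    return math.sqrt(sum((c - p) ** 2 for c, p in zip(candidate, profile)))

# Candidate B answers just as well but emotes less on camera.
candidates = {"A": [2.9, 0.85], "B": [0.5, 0.85]}
for name in sorted(candidates, key=lambda n: distance(candidates[n], ideal)):
    print(name, "distance from 'ideal':", round(distance(candidates[name], ideal), 2))
```

Nothing in that score measures whether someone can do the job; it only measures resemblance to whoever is already there.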

More from the article above:

“New systems are also being advertised that use AI to analyze young job applicants’ social media for signs of “excessive drinking” that could affect workplace performance. This is completely unscientific correlation thinking, which stigmatizes particular types of self-expression without any evidence that it detects real problems. Even worse, it normalizes the surveillance of job applicants without their knowledge before they get in the door.”

GE helps employees make their internal moves

GE isn’t the first company that comes to mind for innovation, yet its current work in talent development and helping employees navigate their careers is quite forward-thinking:

Using data on the historical movement of GE employees and the relatedness of jobs (which is based on their descriptions), the app helps people uncover potential opportunities throughout the company, not just in their own business unit or geography. Lots of companies post open positions on their websites. What’s different about this tool, says Gallman, is that it shows someone jobs that aren’t open so that he or she can see what might be possible in his or her GE career.

Showing employees what’s possible, regardless of whether the opportunity is currently open, is a smart move. It helps anchor the company in the employee’s mind, giving them a path to work towards. I left a few jobs because I had no idea what was possible (and neither did my boss). Having multiple paths to explore can open up valuable conversations and go a long way in retaining talent. Pair that with another new tool that “recommends the training or education someone needs to better perform his or her existing job and to progress,” and GE is making clever use of new analytics and algorithmic tools to retain employees.
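The article doesn’t say how GE computes job relatedness “based on their descriptions,” but a minimal sketch, assuming plain TF-IDF and cosine similarity over the description text (the job titles and blurbs below are invented), might look like this:

```python
# Hypothetical sketch of job "relatedness" from descriptions alone:
# TF-IDF vectors plus cosine similarity. Titles and text are made up;
# GE's actual method isn't described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

jobs = {
    "Field Service Engineer": "Maintain and repair turbine equipment on customer sites.",
    "Reliability Engineer": "Analyze equipment failure data to improve turbine uptime.",
    "Sales Account Manager": "Manage customer relationships and grow service contracts.",
}

titles = list(jobs)
vectors = TfidfVectorizer(stop_words="english").fit_transform(jobs.values())
similarity = cosine_similarity(vectors)

# For each job, list the other jobs from most to least related.
for i, title in enumerate(titles):
    related = sorted(
        ((similarity[i, j], titles[j]) for j in range(len(titles)) if j != i),
        reverse=True,
    )
    print(title, "->", [(t, round(s, 2)) for s, t in related])
```

Even this toy version surfaces a non-obvious neighbor: the reliability role comes out closest to field service because the two descriptions share vocabulary, which is exactly the kind of lateral move the GE tool is meant to reveal.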

The Future of Work from an L&D perspective

As stewards of your company’s value, you need to understand how to get your people ready—not because it’s a nice thing to do but because the competitive advantage of early adopters of advanced algorithms and robotics will rapidly diminish. Simply put, companies will differentiate themselves not just by having the tools but by how their people interact with those tools and make the complex decisions that they must make in the course of doing their work. The greater the use of information-rich tools, the more important the decisions are that are still made by people. That, in turn, increases the importance of continuous learning. Workers, managers, and executives need to keep up with the machines and be able to interpret their results. – Putting Lifelong Learning on the Agenda, McKinsey Insights

Here’s a company that’s living that advice:

“The future of learning sabbaticals at Buffer is closely tied with our desire to help create the future of work. There’s a quote from Stephanie Ricci, head of learning at AXA that’s really powerful in explaining how much impact learning will have for employees in the future:

“By 2020, the core skills required by jobs are not on the radar today, hence we need to rethink the development of skills, with 50% of our jobs requiring significant change in terms of skillset”

That is a huge amount of jobs that will require new skills and for organizations and workers that means a lot of learning and developing.”

Why this company implemented a learning sabbatical for its employees, FastCo

Are your skills still relevant?

Depending on when you graduated college, they might not be:

“The time it takes for people’s skills to become irrelevant will shrink. It used to be, “I got my skills in my 20s; I can hang on until 60.” It’s not going to be like that anymore. We’re going to live in an era of people finding their skills irrelevant at age 45, 40, 35. And there are going to be a great many people who are out of work.” – Getting Ready for the Future of Work


How was this algorithm designed?

Algorithms are everywhere. They make decisions for us, and most of the time we don’t realize it. Remember the United story where a passenger was violently ripped out of his seat? The decision to remove that specific passenger was the result of an algorithm.

As more algorithms shape our lives, we must ask who is designing them, what assumptions those designers make, and what the implications of those assumptions are.

So I’m giving a huge shout out to the podcast 99% Invisible for their episode on how algorithms are designed.

The Age of the Algorithm

Featuring the author of Weapons of Math Destruction, the episode looks at the subjective data behind algorithms that determine recidivism rates and reject job applicants. The examples used and questions raised in this episode should have us asking more about the people and companies designing the algorithms that run in the background of our online and offline lives.

“Algorithms … remain unaudited and unregulated, and it’s a problem when algorithms are basically black boxes. In many cases, they’re designed by private companies who sell them to other companies. The exact details of how they work are kept secret.”

Do you ever feel like you need to go back to school so you can catch up?

“This thirst for AI has pushed all AI-related courses at Stanford to way over their capacity. CS224N: Natural Language Processing with Deep Learning had more than 700 students. CS231N: Convolutional Neural Networks for Visual Recognition had the same. According to Justin Johnson, co-instructor of CS231N, the class size is exponentially increasing. At the beginning of the quarter, instructors for both courses desperately scramble to find extra TAs. Even my course, first time offered, taught by an obscure undergraduate student, received 350+ applications for its 20 spots. Many of the students who took these courses aren’t even interested in the subject. They just take those courses because everyone is doing it”

– excerpt from Confession of a so-called AI Expert.

The author, Chip Huyen, is a third-year student and TensorFlow TA at Stanford. She’s got a fab internship at Netflix and a killer writing style. The full article is a must-read, in part so you can fully appreciate the last sentences:

“Maybe one day people would realize that many AI experts are just frauds. Maybe one day students would realize that their time would be better spent learning things they truly care about. Maybe one day I would be out of job and left to die alone on the sidewalk. Or maybe the AI robot that I build would destroy you all. Who knows?”

Perspective: Job loss to automation and technology in the retail sector

The data in the US, “land of shopping and malls”, is staggering. Year to date in 2017, there have already been more bankruptcies in this sector than in all of 2016. Employment is steadily declining: department stores alone cut more than 100,000 jobs in six months (!), and they now employ one-third the number of people they did in 2001. For comparison, that is 18 times the job losses in the coal mining industry over the same period.

When Robots Take Over Retail

AI is going to make your asshole manager even worse

Before you continue reading, reflect on the last bad manager you had. Remember how they made you feel. Remember the things they did that made your life miserable. Remember the incompetence. Remember that managers don’t get promoted to management because they’re good managers.

I know, it’s not pleasant. I’ve had some pretty awful managers too (but I’ve also had a billion jobs, so it’s inevitable).

Ok. Now read on.

HR tech is hot. Nearly $2 billion in investment hot. And AI is hotter than bacon. So combining HR tech and AI is a sizzling idea (still with me?).

Enter all the startups ready to make managers’ lives easier/employees’ lives more miserable with algorithms to solve all the HR problems. The Wall Street Journal takes a peek into the future of management in How AI Is Transforming the Workplace:

“Veriato makes software that logs virtually everything done on a computer—web browsing, email, chat, keystrokes, document and app use—and takes periodic screenshots, storing it all for 30 days on a customer’s server to ensure privacy. The system also sends so-called metadata, such as dates and times when messages were sent, to Veriato’s own server for analysis. There, an artificial-intelligence system determines a baseline for the company’s activities and searches for anomalies that may indicate poor productivity (such as hours spent on Amazon), malicious activity (repeated failed password entries) or an intention to leave the company (copying a database of contacts). Customers can set activities and thresholds that will trigger an alert. If the software sees anything fishy, it notifies management.”
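The article doesn’t describe Veriato’s actual model, but the “baseline plus anomalies” idea can be as crude as flagging anything far from an average. A minimal, hypothetical sketch (the metric, numbers, and threshold are all my own invention):

```python
# Hypothetical sketch of "baseline and anomaly" monitoring: flag any day
# whose activity sits more than 2 standard deviations from the mean.
# The metric and threshold are assumptions, not Veriato's actual model.
from statistics import mean, stdev

daily_minutes_on_shopping_sites = [5, 8, 3, 6, 7, 4, 95, 6, 5]

baseline = mean(daily_minutes_on_shopping_sites)
spread = stdev(daily_minutes_on_shopping_sites)

for day, minutes in enumerate(daily_minutes_on_shopping_sites, start=1):
    z_score = (minutes - baseline) / spread
    if abs(z_score) > 2:
        print(f"Day {day}: {minutes} min flagged (z = {z_score:.1f}) -> alert manager")
```

Note that the one outlier day inflates the very baseline and spread it’s judged against, one reason naive thresholds both misfire and miss.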

Now remember your asshole manager. Imagine if they had access to this tool. Imagine the micromanagement.

Brutal.

(Side note: I wonder if employees get access to their bosses’ computer logs. Imagine that!)

Let’s keep going.

Another AI service lets companies analyze workers’ email to tell if they’re feeling unhappy about their job, so bosses can give them more attention before their performance takes a nose dive or they start doing things that harm the company.

Yikes.

It’s hard not to read that as: an unhappy worker is somehow a threat to the company. Work isn’t all rainbows and unicorns. We can’t be happy 40 hours a week even in the best of jobs. Throughout our work lives we deal with grief, divorce, strained friendships, children, boredom, indecision, bad coworkers, bad bosses, bad news, financial stress, taking care of parents, etc. etc. etc. And sometimes that comes out in the course of our days spent buried in email. The idea of management analyzing your emails on the watch for anything that isn’t rainbows ignores the reality of our work lives.

What data is the algorithm built on? What are the signs of unhappiness? Bitching about a coworker? Complaining about an unreasonable deadline? Micromanaging managers? What’s the time frame? One day of complaints or three weeks? Since algorithms take time to tweak and learn, what happens to employees (and their relationships with management) who are incorrectly flagged as unhappy while the algorithm learns?
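To make that concrete, here’s a toy, entirely hypothetical sketch of keyword-based flagging. The word list and threshold are arbitrary choices someone has to make, which is exactly the problem:

```python
# Toy, hypothetical "unhappiness" flag: count matches against a keyword
# list. Venting about a deadline trips it; a quietly disengaged employee
# sails through. The list and threshold are invented.
UNHAPPY_WORDS = {"unfair", "frustrated", "quit", "hate", "deadline"}
THRESHOLD = 2

def flag_email(text: str) -> bool:
    words = set(text.lower().split())
    return len(words & UNHAPPY_WORDS) >= THRESHOLD

emails = [
    "I hate that this deadline moved again",         # venting: flagged
    "Planning my exit, updating my resume quietly",  # flight risk: missed
]
for email in emails:
    print(flag_email(email), "<-", email)
```

Swap one word and the verdict flips; the flag measures tone, not risk.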

Moreover, what do those conversations look like when “unhappy” employees are being called into management’s office?

Manager: Well we’ve called you in because our Algorithm notified me that you’re unhappy in your role.

Employee:

Manager: Right… so … can you tell me what’s making you so unhappy?

Employee: I’m fine.

Manager: Not according to The Algorithm. It’s been analyzing all your emails. I noticed you used the word “asshat” twice in one week to describe your cubicle mate. Your use of the f word is off the charts compared to your peers on the team. You haven’t used an exclamation point to indicate anything positive in at least three weeks. The sentiment analysis shows you’re an 8 out of 10 on the unhappy chart. Look, here’s the emoji the algorithm assigned to help you understand your unhappiness level.

Employee: It’s creepy you’re reading my emails.

Manager: Now remember, you signed that privacy agreement at the beginning of your employment and consented to this. You should never write anything in a company email that you don’t want read.

Employee:

And do the companies that purchase this technology even ask the hard questions?

The issue I have with this tech, apart from it being ridiculously creepy, is that it rests on some seriously bad assumptions. It assumes:

  • All managers have inherently good intentions
  • All managers are competent
  • All organizations train their managers on how to be effective managers
  • All organizations train their managers on appropriate use of technology
  • Managers embrace new technology

Those are terrible assumptions. Here’s a brief, non-exhaustive list of issues I’ve had with managers over the past ten years:

  • Managers who can’t define what productivity looks like (beyond DO ALL THE THINGS)
  • Managers who can’t set and communicate goals
  • Managers who can’t listen to concerns voiced by the team (big egos)
  • Managers who can’t understand lead scoring and Google Analytics (from the CEO and VP of sales and marketing, no doubt)
  • Managers who can’t use a conference call system (technology, am I right?!)
  • Managers with no interpersonal communication skills and a lack of self-awareness

Maybe we can all save ourselves by adding a new question when it’s our turn to ask questions in the interview:

“Tell me about your approach to management. What data do you use to ensure your AI technology accurately assesses employee happiness?”

Maybe I’m just cynical. Maybe it’s because I’ve had a few too many bad managers (as have my peers). Maybe I just feel sorry for good employees struggling under bad management. And maybe organizations should get better at promoting people who can manage (i.e., people with soft skills) instead of those who can’t before this technology is adopted.

Anyhow, to wrap up, this whole post has me feeling so grateful for the good managers I’ve had. The ones who got it right. Who listened, encouraged, and provided constructive feedback on all my work. And though I’m sure they’re not reading this post, a shout out to my favorite, amazing managers from two very different jobs: Kirsten and Cathy. They didn’t need an algorithm to understand their team’s performance and employee happiness. They had communication skills, empathy, and damn good skills that made working for them a delight.