Showing posts with label apps. Show all posts

Wednesday, October 2, 2019

Need to Work Differently? Learn Differently

Learned a lot lending an editorial hand here:

Boss Magazine, October 2019

by Michael Griffiths


Digitization requires a new set of skills and a new set of training for employees


The next time your company holds an all-hands meeting, look around the room — or the arena — and consider this: It’s likely that more than half the people present will need reskilling or upskilling in the next three years.

This probably doesn’t come as a complete surprise to you. The forces of change are transforming every aspect of work, including what is done, who does it, and where it is done.

For example, emerging technologies — especially AI and machine learning — are among the most disruptive of these forces. In fact, 81 percent of respondents to Deloitte’s 2019 Global Human Capital Trends survey indicated they expect the use of AI to increase or increase significantly over the next three years. Unlike some, we don’t believe that AI will eliminate the need for a workforce. Instead, we anticipate the rise of hybrid jobs enabled by digitization and technology, as well as the emergence of a new kind of job, which we call the superjob. A superjob combines work and responsibilities from multiple traditional jobs, using technology to both augment and broaden the scope of the work performed and involving a more complex set of digital, technical, and human skills.

Hybrid jobs and superjobs can enable your company to be more responsive to customers and adaptable to change. But capturing those benefits requires a more deliberate and agile approach to capability development. Already, many companies are responding to this need: Our research finds that 83 percent of organizations are increasing their investments in reskilling programs, and more than half (53 percent) increased their learning and development budgets by 6 percent or more in 2018.

But will more learning be enough at your company? It’s doubtful. To Work Differently, we think your company should first Learn Differently. Read the rest here.

Thursday, September 8, 2016

TechSavvy: A Code of Ethics for Smart Machines


MIT Sloan Management Review, September 8, 2016

by Theodore Kinni


Smart machines need ethics, too: Remember that movie in which a computer asked an impossibly young Matthew Broderick, “Shall we play a game?” More than three decades later, it turns out that global thermonuclear war may be the least likely of a slew of ethical dilemmas associated with smart machines — dilemmas with which we are only just beginning to grapple.

The worrisome lack of a code of ethics for smart machines has not been lost on Alphabet, Amazon, Facebook, IBM, and Microsoft, according to a report by John Markoff in The New York Times. The five tech giants (if you buy Mark Zuckerberg’s contention that he isn’t running a media company) have formed an industry partnership to develop and adopt ethical standards for artificial intelligence — an effort that Markoff infers is motivated as much to head off government regulation as to safeguard the world from black-hearted machines.

On the other hand, the first of a century’s worth of quinquennial reports from Stanford’s One Hundred Year Study on Artificial Intelligence (AI100) throws the ethical ball into the government’s court. “American law represents a mixture of common law, federal, state, and local statutes and ordinances, and — perhaps of greatest relevance to AI — regulations,” its authors declare. “Depending on its instantiation, AI could implicate each of these sources of law.” But they don’t offer much concrete guidance to lawmakers or regulators — they say it’s too early in the game to do much more than noodle about where ethical (and legal) issues might emerge.

In the meantime, if you’d like to get a taste for the kinds of ethical decisions that smart machines — like self-driving cars — are already facing, visit MIT’s Moral Machine project. Run through the scenarios and decide for yourself who or what the self-driving car should kill. Aside from the fun of deciding whether to run over two dogs and a pregnant lady or drive two old guys into the concrete barrier, it’ll help the research team create a crowd-sourced view of how humans might expect ethical machines to act. This essay from UVA’s Bobby Parmar and Ed Freeman will also help fuel your thinking. Read the rest here.

Tuesday, June 7, 2016

Tech Savvy: Exploring the Ethical Limits of App Design

by Theodore Kinni
Are your employee apps ethical? Companies are providing employees with more and more digital services for purposes that range from enhancing teamwork to getting a better night’s sleep. But do they promote agency — or addiction? Perhaps it’s time for managers to take a closer look at the design of those services — and question the techniques they employ to create a compelling user experience.
Toward this end, Tristan Harris has some choice words in a new article on Medium. “I’m an expert on how technology hijacks our psychological vulnerabilities,” he begins. “That’s why I spent the last three years as Google’s Design Ethicist caring about how to design things in a way that defends a billion people’s minds from getting hijacked. When using technology, we often focus optimistically on all the things it does for us. But I want to show you where it might do the opposite.”
Harris goes on to call out common hijacks that are intentionally and unintentionally built into the design of websites and apps. They include: menus that give the impression of choice, while limiting it; the embedding of intermittent, variable rewards that induce addictive behaviors; reliance on powerful motivators such as social approval and reciprocity; and seven more.
“I’ve listed a few techniques but there are literally thousands,” adds Harris. “Imagine whole bookshelves, seminars, workshops and trainings that teach aspiring tech entrepreneurs techniques like these. Imagine hundreds of engineers whose job every day is to invent new ways to keep you hooked.”
Harris, who studied under Professor BJ Fogg in Stanford’s Persuasive Technology Lab, is talking about big services offered to the general public by companies such as Facebook, Instagram, TripAdvisor, and NYTimes.com. But his conclusion applies to digital services aimed at employees, too:
“The ultimate freedom is a free mind, and we need technology that’s on our team to help us live, feel, think and act freely. We need our smartphones, notifications screens and web browsers to be exoskeletons for our minds and interpersonal relationships that put our values, not our impulses, first. People’s time is valuable. And we should protect it with the same rigor as privacy and other digital rights.”