
Thursday, December 8, 2022

It’s Time to Take Another Look at Blockchain

MIT Sloan Management Review, December 8, 2022

Ravi Sarathy, interviewed by Theodore Kinni



It wasn’t long after the developers of bitcoin first used a distributed ledger to record transactions in 2008 that the blockchain revolution was announced with all the fanfare that usually accompanies promising new technologies. Then, as often happens with emerging technologies, blockchain’s promise collided with developmental realities.

Now, a decade and a half down the road, that early promise is becoming manifest. In his new book, Enterprise Strategy for Blockchain: Lessons in Disruption From Fintech, Supply Chains, and Consumer Industries, Ravi Sarathy, professor of strategy and international business at the D’Amore-McKim School of Business at Northeastern University, argues that distributed ledger technology has matured to the point of enabling a host of applications that could disrupt industries as diverse as manufacturing, medicine, and media.

Sarathy spoke with Ted Kinni, senior contributing editor of MIT Sloan Management Review, about the state of blockchain, the applications that are most relevant now for large companies, and how their leaders can harness the technology before established and new competitors use it against them.


MIT Sloan Management Review: Blockchain has been slow to gain traction in many large companies. What’s holding it back?

Sarathy: Blockchain is a complex technology. It is often secured by an elaborate mathematical puzzle that is energy intensive and requires large investments in high-powered computing. This also limits the volume of transactions that can be processed easily, making it hard to use blockchain in a setting like credit card processing, which involves thousands of transactions a second. Interoperability is another technological challenge. You’ve got a lot of different protocols for running blockchains, so if you need to communicate with other blockchains, it creates points of weakness that can be hacked or otherwise fail.
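The "elaborate mathematical puzzle" Sarathy describes is, in most public blockchains, proof-of-work hashing: miners must guess a nonce until the block's hash falls below a target, and every failed guess is wasted computation. A minimal illustrative sketch of the idea (the `mine` function and block data are hypothetical, not any production protocol):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # each failed guess is discarded work; this is the energy cost

# Every additional leading zero digit multiplies the expected work by 16,
# which is why throughput stays low as the network raises difficulty.
nonce = mine("block #1: Alice pays Bob 5 units", difficulty=4)
```

The same brute-force search that secures the ledger is what makes it energy intensive and caps transaction volume, since each block must clear the puzzle before the next can be added.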

Aside from the technological challenges, there is the issue of cost and benefit. Blockchain is not free, and it’s not an easy sell. It requires significant financial and human resources, and that’s a problem because it’s hard to convince CFOs and other top managers to give you a few million dollars and a few years to develop a blockchain application when they do not have clear estimates of expected returns or benefits.

Lastly, there are organizational challenges. A blockchain is intended to be a transparent, decentralized network in which everyone talks to everyone else without intermediaries — a stark contrast to a business world organized around hierarchies. Making that transition can require a long philosophical and cultural leap for traditional companies used to a chain of command. Trust, too, becomes a huge issue, particularly when you start adding independent firms to a blockchain. Read the rest here.

Thursday, August 3, 2017

Blockchain is poised to disrupt trade finance

Learned a lot lending an editorial hand here:

PwC Next in Tech blog, August 3, 2017

by Grainne McNamara




Trade finance has enabled the exchange of goods for millennia. Babylonian cuneiform tablets dating back to 3000 BC mention the kind of promissory notes and letters of credit that still underpin international trade. And, aside from incremental improvements in the ways and means of trade finance, not all that much has changed in the fundamental elements of this approximately US$40 billion sector of the financial services industry over the past 5,000 years.

That is, until now. The advent of blockchain technology is on the verge of revolutionizing trade finance—and it threatens to leave behind any financial services company that doesn’t move with the times. Read the rest here.

Thursday, November 10, 2016

TechSavvy: Is Your Company Winning the Race to Digital Transformation?

MIT Sloan Management Review, November 10, 2016

by Theodore Kinni


In some respects, the digitization of business is a pretty nebulous subject. It’s not like a company achieves digital transformation on some specific date — the darn target moves as new technologies and applications appear. That’s one reason why Jane McConnell’s 10th annual inquiry into “The Organization in the Digital Age” is worth a look.

McConnell frames digital transformation as an organizational imperative that manifests itself in three dimensions: people, workplace, and technology. Over the past decade, she has been gauging the progress that a broad, international group of 300+ companies and other institutions has been making toward this imperative in three stages.

The Starting stage is defined by an individual (rather than organizational) digital awareness — digital initiatives are ad hoc and infrequent; senior leaders are minimally involved; most decisions are made by the traditional hierarchy; work mainly takes place in established channels, with some virtual venues. The Developing stage is defined by mobilization — a compelling vision for digital transformation exists; senior managers are leading the charge; most functions, levels, and entities are involved in digital initiatives. The Maturing stage is defined by trust — digital is considered a strategic asset; it is embedded in work practices; much decision making is decentralized; information and collaboration are organization-wide and include customers and external partners.

“The 2016 data shows 16% of the survey participants in the Maturing stage, 52% in the Developing stage, and 32% in the Starting stage,” McConnell reports. Where does your company place? Read the rest here.

Thursday, September 8, 2016

TechSavvy: A Code of Ethics for Smart Machines


MIT Sloan Management Review, September 8, 2016

by Theodore Kinni


Smart machines need ethics, too: Remember that movie in which a computer asked an impossibly young Matthew Broderick, “Shall we play a game?” Three decades later, it turns out that global thermonuclear war may be the least likely of a slew of ethical dilemmas associated with smart machines — dilemmas with which we are only just beginning to grapple.

The worrisome lack of a code of ethics for smart machines has not been lost on Alphabet, Amazon, Facebook, IBM, and Microsoft, according to a report by John Markoff in The New York Times. The five tech giants (if you buy Mark Zuckerberg’s contention that he isn’t running a media company) have formed an industry partnership to develop and adopt ethical standards for artificial intelligence — an effort that Markoff infers is motivated as much to head off government regulation as to safeguard the world from black-hearted machines.

On the other hand, the first of a century’s worth of quinquennial reports from Stanford’s One Hundred Year Study on Artificial Intelligence (AI100) throws the ethical ball into the government’s court. “American law represents a mixture of common law, federal, state, and local statutes and ordinances, and — perhaps of greatest relevance to AI — regulations,” its authors declare. “Depending on its instantiation, AI could implicate each of these sources of law.” But they don’t offer much concrete guidance to lawmakers or regulators — they say it’s too early in the game to do much more than noodle about where ethical (and legal) issues might emerge.

In the meantime, if you’d like to get a taste for the kinds of ethical decisions that smart machines — like self-driving cars — are already facing, visit MIT’s Moral Machine project. Run through the scenarios and decide for yourself who or what the self-driving car should kill. Aside from the fun of deciding whether to run over two dogs and a pregnant lady or drive two old guys into the concrete barrier, it’ll help the research team build a crowd-sourced picture of how humans expect ethical machines to act. This essay from UVA’s Bobby Parmar and Ed Freeman will also help fuel your thinking. Read the rest here.

Thursday, May 26, 2016

Tech Savvy: Two Questions for Managers of Learning Machines

by Theodore Kinni
Two questions that managers of intelligent machines should ask: It’s been a couple of years since Stephen Hawking warned that artificial intelligence could “spell the end of the human race.” The terminators aren’t here yet and unless they come very soon, the managers of AI-based technology have a couple of more immediate issues to address, according to Vasant Dhar of NYU’s Stern School of Business and Center for Data Science.
The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay and Google’s Lexus, he reminds us that it’s very hard to control AI machines in complex settings. “There is no clear answer to this vexing issue,” says Dhar. But he does offer some guidance: Analyze the machine’s training errors; use an “adversary” — through means such as crowdsourcing — to try to trip up the machine; and estimate the cost of error scenarios to better manage risks.
The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions. “We don’t have any framework for evaluating which decisions we should be comfortable delegating to algorithms and which ones humans should retain,” he writes. “That’s surprising, given the high stakes involved.” Dhar suggests addressing this issue with a risk-oriented framework that he calls a Decision Automation Map. The map plots decisions along two independent dimensions — predictability and cost per error — and suggests whether a given decision would be better made by a human or a machine. Read the rest here.