The “reality/conceivability” technology gap

The recent Congressional hearings on Facebook clearly indicate a widening gap between the mental models of technology held by people who have only a partial sense of what’s possible, and what is actually technologically possible with machine learning capabilities, “big data”, and the location of ‘my stuff’ in the “cloud”. How might we resolve this tension at a time when participating in a world deeply enmeshed with technology is increasingly necessary, yet these technologies and tech services lack regulatory, and conceptual, guardrails?

Is there really such a thing as “informed consent” when many consumers can’t really understand what Facebook does with data, what’s going on under Alexa’s hood, or the risks that the Equifax breach exposed them to?

How do we address the evolution of regulatory requirements if they are always one step behind emerging technology and business models, or hold companies to an ethical standard when business and consumer values may feel in conflict?

What responsibilities do “users” have to better inform themselves about navigating a technologically sophisticated and permeated world?



In algorithms we trust? Navigating systems-based biases

Behavioral economics posits that humans are “irrational” and frequently don’t act in their own best or long-term interests. “Bias-beating” solutions, like Applied’s tools for more equitable hiring, together with machine learning and AI-enabled services, might be able to support human decision-making in ways that lead to better overall outcomes… except that biases can also be baked into technologies, and are often so deeply embedded and reinforced within organizational systems that they feel “natural” or like “what good looks like”.

How can we better ensure that “nudge” solutions or data-driven decisions are not stopping short of their goals, or inadvertently reinforcing inequitable systems, but are instead harnessing behavior and data as forces for good?

How can we most effectively balance what’s measurable (i.e., actual behaviors) with user context or with what didn’t happen? For example, how do data and behavioral economics approaches best address the problems of non-participants, such as the unbanked, who don’t leave a data trail?

Who decides, and what defines, a user’s “best interest” when gauging outcomes for individuals vs. society, or for now vs. the future?



Design, data and behavior: Greater than the sum of their parts

It’s abundantly clear that ‘data’ (or ‘digital’) is not a fad but a way of doing business, both operationally and strategically. Behavioral economics principles are increasingly recognized as key contributors to smart solutions, and design has also earned its seat at the table, organizationally and across a variety of content domains. Yet many organizations still struggle to get the best out of these disciplines, whether singly or in combination, in consumer-facing solutions or internally in “back of house” functions like HR.

What have we seen work as best practices in integrating functions of design, data, and behavior into organizations (or with clients), and what gets in the way?

What myths exist about these disciplines, and who is best situated to own or challenge current mindsets and orthodoxies?

Maturing fields often veer toward specialization… what does this mean for both established and aspirational design, data, and behavioral professionals?



The future starts now… what’s next?

In the spirit of “just around the corner…”, and building on the panelists’ discussion: where do we see the intersection of the fields of design, data, and behavior going next?

What (or who) is going to feed the next wave of tools, methodologies, theories, and applications across these domains?

What does this shift mean for the designers and developers of products, services, and systems, as well as for end users?

How might the unintended consequences of existing approaches — some of which we are already all too aware of today — inform the next wave or direction of work in these fields, and what does that mean for us gathered here?