
Session 4B

Tracks
Track 2
Thursday, December 4, 2025
15:00 - 17:20

Speaker

Mr Matias Aleksanteri Eilola
Doctoral Researcher
University of Jyväskylä

Implementing the AI Hype into the Workplace: The Discursive Construction of the AI Consulting Market

Abstract

A claim such as 'The age of AI is now, and you're falling behind' can be worrying for many companies. A simple solution might be to bring in a consultant who promises to guide you into this new era. This study explores how AI consulting firms discursively construct the market for their services as part of contemporary techno-economic developments.

Consulting organizations wield significant influence over policy changes (Schlögl, Weiss, & Prainsack, 2021) while also exhibiting blind spots in their practices (Monod et al., 2024) when engaging in discussions on technological advancements in the workplace. With LLMs emerging as a new business opportunity (see Mohan, 2024), the question arises: how is this influential industry discursively constructing the need or want for AI consulting?

In this presentation, I will explore how AI consulting organizations create and shape the market itself. This study analyzes marketing texts from 14 Finnish consulting organizations specializing in AI and 10 focused on digitalization. Using discourse and frame analysis as a methodological framework, I examine central discourses through frame setting, focusing on how different voices are framed within them.

The results illuminate how consultants frame the rising AI-consulting market and assert expertise in a technology still in its infancy compared to similar practices in established consulting markets.

These results are situated within a broader market landscape in which a technological breakthrough has created a new Wild West of value-seeking opportunities.

Keywords: Artificial intelligence, consulting, framing, discourse

References

Mohan, S. K. (2024). Management consulting in the artificial intelligence – LLM era. Management Consulting Journal, 7(1), 9-24. https://doi.org/10.2478/mcj-2024-0002

Monod, E., Korotkova, N., Khalil, S., Meythaler, A., & Joyce, E. (2024). Unveiling blind spots in consulting firms' disseminating discourse on digital transformation. Information Systems and e-Business Management, 22(4), 759-803. https://doi.org/10.1007/s10257-024-00687-x

Schlögl, L., Weiss, E., & Prainsack, B. (2021). Constructing the 'Future of Work': An analysis of the policy discourse. New Technology, Work and Employment, 36(3), 307-326. https://doi.org/10.1111/ntwe.12202
Riikka Nissi
University of Jyväskylä

Simulating political dialogue with AI: Citizens' questions to the candidates' AI clones in the Finnish presidential election

Abstract

Recent technological developments have given rise to conversational AI that blurs the boundaries between human and non-human agency (Leonardi 2023). In this presentation, we explore the dynamics of human-AI interaction in the context of political discourse, namely election campaigns. The bedrock of a democratic society lies in people's freedom to form opinions based on correct information and to select their representatives for societal decision-making. However, like other sectors of society, politics is undergoing a transformation due to AI technologies, to the point that political debates may be conducted by politicians' AI replicas. Our data presents such a scenario: it comes from an experimental study in which citizens could converse with the candidates' public AI chatbot clones during the Finnish presidential election in early 2024.

Using digital conversation analysis as our methodological approach, we examine 'insisting questions', where the citizen poses a question to the AI candidate and then corrects the response given. Our results show that, similar to political interviews (e.g. Berg 2003), insisting questions imply accusations whose legitimacy is negotiated during question-answer sequences. The accusations draw on the politicians' public image, and the AI candidates are able to unpack them in their responses, followed by the citizen's third-turn correction that attributes intentionality to the AI and displays mistrust towards it as a real political figure.

Our study sheds light on the nature of human-AI interplay that is increasingly prevalent across society and may profoundly transform the concepts of knowledge, trust and moral order in political and other institutional discourse.
PhD Silje Susanne Alvestad
Associate Professor (NDUC) and Researcher (UiO)
University of Oslo

Human-chatbot interaction: The case of disinformation in large language models

Abstract

The recent developments within large language models (LLMs) have been extremely rapid, with the launch of ChatGPT in November 2022 as a major milestone. LLMs open new opportunities (Bommasani et al., 2021) but also amplify challenges, particularly in the context of online disinformation (Goldstein et al., 2023). Bad actors can easily and cheaply use LLMs to produce synthetic content at scale, which threatens to significantly increase the volume of disinformation online. Adding to this threat is the increasing use of LLMs as interfaces for online information retrieval. Despite improvements, LLMs remain prone to bias and hallucination, increasing the risk of users unintentionally generating, spreading and amplifying disinformation (Brandtzaeg et al., 2024).

Given the rapid increase in AI-generated text online, we need to know more about this language in general, and about its persuasive potential and power in particular. In this study we combine insights from the language, media and computer sciences to investigate (i) how persuasive and pragmatic features of human-produced language hold up in AI-generated language and (ii) how LLMs may increase or decrease the persuasiveness of disinformation. To this end, we prompt various LLMs to generate news items comparable to the human-generated ones used in Trnavac et al. (2024). We propose that the tendency towards a specific set of persuasion techniques is less clear, while the language itself is no less persuasive. This will be tested through a combination of human feedback on persuasiveness, as in Thomas et al. (2019), and automated analyses based on AI models and algorithms.
Prof Numa Piers Markee
Emeritus Research Professor of Linguistics
University of Illinois Urbana-Champaign

CA-Squared: Advances and Applications of Researcher-first Computer-Assisted Conversation Analysis

Abstract

Creating multimodal conversation analytic (MMCA) transcripts can require 60+ hours of transcription time per hour of raw video data. Here, we present a new automated "CA-Squared" approach under a single design methodology. Using advances in AI and algorithmic approaches, we develop researcher-centric methodologies that enhance researchers' productivity and better support specific research questions without sacrificing accuracy.

Our system combines new advances in AI, large language models, and computer vision to detect features of embodiment commonly studied in MMCA. Specifically, we develop and use coherent spatial and temporal segmentation techniques to analyze multi-participant video streams, and we empirically demonstrate applications of modern computer vision techniques that exceed the performance of naively used AI models. These applications include the identification of embodied features commonly included in multimodal transcripts (eye gaze phenomena, facial expressions such as thinking faces, hand and other gestures, and different manifestations of body torque), which are particularly useful in developing rigorous analyses of socially distributed cognition.

To preserve privacy, we demonstrate computer-assisted CA using only local computational resources; we do not send audio or video data to remote APIs or other free or paid services. We also present and evaluate detection approaches suitable both for extended (>0.1 s) behaviors such as thinking faces and for much briefer events (<0.1 s) such as eyebrow flashes and breaks in mutual eye gaze.

Finally, we discuss some exciting and unexplored benefits for MMCA research, including language use and (second) language learning, and explore the potential cross-cultural advantages of CA-Squared over traditional CA approaches.
Dr. Jakub Mlynar
Academic Associate
HES-SO Valais-Wallis University of Applied Sciences Western Switzerland

Embodying driverless shuttles

Abstract

From 2016 to 2022, ten pilot trials of automated vehicles (AVs) for public transport took place in Switzerland, but their final reports rarely acknowledge the routine work done by the human staff. In 2024, we conducted 25 video-recorded interviews with former safety operators of AVs, mostly at the original pilot trial sites. A recurrent phenomenon observed in the recordings is that interviewees produce bodily conduct representing the vehicle’s movement alongside their narrative accounts. Grounding our insights in ethnomethodology and conversation analysis, and contributing to studies of mobility and AI (e.g., [1–2]), we focus on two distinct ways this is done.

First, in schematic embodiment, the interviewees represent the driverless shuttle gesturally (using their palms or objects such as smartphones) to illustrate the AV’s coordination with other traffic members. This typically demonstrates coordination issues, such as overtaking or giving way. Secondly, in experiential embodiment, the entire body is used to re-enact how the AV’s movements are experienced by the human on board. This typically demonstrates problematic aspects of riding with driverless shuttles, such as abrupt emergency braking.

Our paper explicates both types of embodiment through video excerpts, analysing their sequential detail and relationship to similar practices (e.g., [3–4]). We show how demonstrating features of a non-present technology in an embodied way minimizes the technical character of the recounted experience. The ‘autonomous agency’ of a machine thus emerges from specific situations of its use, relying on the skilled work of human participants.

REFERENCES
[1] https://doi.org/10.4324/9780429276767-23
[2] https://doi.org/10.1007/s00146-024-01919-x
[3] https://doi.org/10.1075/ld.00015.lef
[4] https://www.inlist.uni-bayreuth.de/issues/58/inlist58.pdf
Giuanna Caviezel
PhD Student and Lecturer
UZH

Mobile discursive place-making on the ocean: Multisensory semiotic landscapes as safaris in the Arctic

Abstract

This study responds to the need for research that links mobility and tourism studies, specifically in place-making (Hays 2012), and accounts for an increased focus on multisensory aspects of semiotic landscapes (Pennycook & Otsuji 2015). We explore the concept of "safari" in relation to Norwegian cruise tourism in the Arctic, a notion conceived as a process that orders the landscape through movement across it and is produced by the spectacle of nature (Urry 1990).
We base our analysis on ethnographic data from semiotic landscapes on a Hurtigruten ship (Mohr & Ackermann-Boström 2023), perception questionnaires with international travelers (Mohr et al. 2023) and interviews with Norwegian travelers (Mohr 2024), analyzed in a multimodal discourse analytic framework. The data emphasize the importance of nature and wildlife in Norwegian cruise tourism. Various dynamics emerge:
1) A general foregrounding of “exotic” nature/wildlife.
2) A general semiotic emphasis on local authenticity in relation to domains like food, versus emphasis on Otherness regarding the journey as a whole, i.e., traversing the “wilderness”.
3) Targeting of different tourist groups semiotically with different animals and different languages, e.g., “safaris” offered exclusively in English, versus reference to smaller animals mostly provided in Norwegian. This is visible on, e.g., menus and posters, but also audible in soundscapes on board.
Altogether, our analysis demonstrates how a cruise creates dynamic place in a multisensory manner. This is based on references to and foregrounding of wildlife/nature in the semiotic landscape, but it is also enabled by imaginaries of the journey traversing space.
Prof. Sergei Kruk
Riga Stradins University

A Semiotics of Solidarity: How words constrain social interaction

Abstract

Social scientists operate with essentially contested concepts that require strong explicit definitions. To designate these concepts, they use signs from the standard vocabulary. All native speakers understand the words 'society', 'cooperation', or 'solidarity' in everyday communication; in specialized discourse, however, they are used as terms that connote different meanings. To complicate matters, the meaning varies between disciplines and paradigms.

This article analyzes the interdiscursive mistranslation of sociological terminology in the Latvian language. The empirical material consists of a corpus of documents on social solidarity. Since the establishment of the Republic of Latvia in 1991, ethnic and linguistic diversity has been perceived as a threat to national sovereignty. Under pressure from the European Union, the government has, since 2001, published policy documents introducing special measures to increase social solidarity. Policy makers adapt English-language terminology and refer to the authority of classical and contemporary scientists to justify state interference in private life.

A critical reading of these propositions reveals inconsistent usage of terms, erroneous translation of original texts, and a lack of an analytical approach to argumentation. The contextual meaning of academic terminology suggests that policy makers mistranslate 20th-century sociology within the framework of 18th-century historicism, a foundational block of the ethnocultural definition of the nation. Contemporary political discourse treats ethnocultural homogeneity as a necessary condition for solidarity. In the genre of the government document, this discourse shapes legal institutions that enforce modes of social interaction contradicting the reality of modern pluralistic societies.