PerFail 2026

5th International Workshop on Negative Results in Pervasive Computing

Co-located with IEEE PerCom 2026

March 2026, Pisa, Italy

“Learn from the mistakes of others. You can’t live long enough to make them all yourself.”
- Eleanor Roosevelt

ABOUT

Not all research leads to fruitful results: trying new ways or methods may surpass the state of the art, but sometimes the hypothesis is not proven or the improvement turns out to be insignificant. Yet failure to succeed is not failure to progress, and this workshop aims to create a platform for sharing insights, experiences, and lessons learned from conducting research in the area of pervasive computing.

While the direct outcome of negative results might not contribute much to the field, the wisdom of hindsight can be a contribution in its own right, helping other researchers avoid falling into similar pitfalls. We consider negative results to be studies that are run correctly (in light of the current state of the art) and in good practice, but that fail to prove their hypothesis or produce no significant improvement. The “badness” of the work can also lie in a properly executed but ill-fitting data collection design, or in (non-trivial) lapses of foresight, especially in measurement studies.

We collected the insights and discussion from last year's workshop into a paper. You can read the published manuscript in IEEE Pervasive Computing here.

PerFail has also been featured in the Nature feature article "Illuminating 'the ugly side of science': fresh incentives for reporting negative results". You can read the article here.

CALL FOR PAPERS

The papers of this workshop should highlight lessons learned from negative results. The main goal of the workshop is to share experiences so that others can avoid the pitfalls that the community generally overlooks in final accepted publications. All areas of pervasive computing, networking, and systems research are considered. While we take a very broad view of “negative results”, submissions based on opinions or non-fundamental circumstances (e.g., coding errors and “bugs”) are out of scope, as they do not indicate whether the approach (or hypothesis) itself was flawed.

The main topics of interest include (but are not limited to):

  1. Studies with unconvincing results that could not be verified (e.g., due to a lack of datasets)
  2. Underperforming experiments due to oversights in system design, inadequate or misconfigured infrastructure, etc.
  3. Research studies with setbacks that resulted in lessons learned and acquired hindsight (e.g., a hypothesis with too limiting or too broad assumptions)
  4. Unconventional, abnormal, or controversial results that contradict the expectations of the community
  5. Unexpected problems affecting publications, e.g., ethical concerns, institutional policy breaches, etc.
  6. “Non-publishable” or “hard-to-publish” side-outcomes of a study, e.g., mis-trials of experiment methodology/design, preparations for proof-of-correctness of results, etc.

We also welcome submissions from experienced researchers recounting post-mortems of experiments or research directions that failed in the past (e.g., in a story-based format). With this workshop, we aim to normalize the negative outcomes and inherent failures of conducting research in pervasive computing, systems, and networking, and to provide a complementary view to all the success stories in these fields.

Important Dates*

Paper submission: December 7, 2025 (extended from November 17)
Author Notification: January 15, 2026 (extended from January 5)
Camera-ready Submission: February 2, 2026
Workshop Date: March 2026

* All dates are AoE (check it here).

Please note that the camera-ready submission deadline is final and non-negotiable. No extensions will be granted.

TECHNICAL PROGRAM

14:00 - 14:05
Opening Remarks
14:05 - 15:00
Keynote: The Art of Being Right at the Wrong Time
Claudio Cicconetti (IIT-CNR, Italy)

Claudio Cicconetti (PhD from the University of Pisa in 2003) is a Senior Researcher at IIT-CNR, where he leads the |Quantum⟩ Lab. He has coordinated and contributed to numerous national and international R&D initiatives, receiving awards such as the Facebook Networking research grant (2021) and the Quantum Internet Application Challenge (2023). From 2009 to 2018, he held R&D roles in industry. He serves on the editorial boards of Computer Networks, IET Quantum Communication, and Computers and Electrical Engineering. He has co-authored over 80 publications and two international patents.

15:00 - 15:30
Paper Presentations
LLMs Explain't: A Post-Mortem on Semantic Interpretability in Transformer Models
Authors: Alhassan Abdelhalim (Universität Hamburg, Germany); Janick Edinger (University of Hamburg, Germany); Sören Laue and Michaela Regneri (Universität Hamburg, Germany)
Large Language Models (LLMs) are becoming increasingly popular in pervasive computing due to their versatility and strong performance. However, despite their ubiquitous use, the exact mechanisms underlying their outstanding performance remain unclear. Different methods for LLM explainability exist, and many are, as a method, not fully understood themselves. We started with the question of how linguistic abstraction emerges in LLMs, aiming to detect it across different LLM modules (attention heads and input embeddings). For this, we used methods well-established in the literature: (1) probing for token-level relational structures, and (2) feature-mapping using embeddings as carriers of human-interpretable properties. Both attempts failed for different methodological reasons: attention-based explanations collapsed once we tested the core assumption that later-layer representations still correspond to tokens, and property-inference methods applied to embeddings failed because their high predictive scores were driven by methodological artifacts and dataset structure rather than meaningful semantic knowledge. These failures matter because both techniques are widely treated as evidence for what LLMs supposedly understand, yet our results show such conclusions are unwarranted. These limitations are particularly relevant in pervasive and distributed computing settings where LLMs are deployed as system components and interpretability methods are relied upon for debugging, compression, and explaining models.
Promises and Risks in using LLMs in Scientific Review
Authors: Koojana Kuladinithi (Hamburg University of Technology & Institute of Communication Networks, Germany); Konrad Fuger, Aliyu Makama, Andreas Timm-Giel, Maximilian Kiener and Jonas Bozenhard (Hamburg University of Technology, Germany)
The current scientific peer review process faces increasing strain and is becoming difficult to sustain. While large language models (LLMs) can help reviewers by summarizing papers, assessing clarity, and generating timely feedback, the research community remains cautious about relying on LLM-based reviews. In this paper, we explore how LLMs could support and potentially improve parts of the peer review process. We examine both the benefits and limitations by analysing LLM-generated reviews of our own previously published papers, comparing them with human reviewer feedback, identifying points of agreement and disagreement, and considering requirements for the responsible and ethical use of LLMs in peer review.
15:30 - 16:00
Coffee Break
16:00 - 16:45
Paper Presentations
Degradation of Feature Space in Continual Learning
Authors: Chiara Lanza (Centre Tecnologic de Telecomunicacions de Catalunya, Spain); Roberto Matheus Pinheiro Pereira (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA) & Universitat Politècnica de Catalunya, Spain); Marco Miozzo (CTTC/CERCA, Spain); Eduard Angelats (CTTC, Spain); Paolo Dini (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain)
Centralized training is the standard paradigm in deep learning, enabling models to learn from a unified dataset in a single location. In such a setup, isotropic feature distributions naturally arise as a means to support well-structured and generalizable representations. In contrast, continual learning operates on streaming and non-stationary data, and trains models incrementally, inherently facing the well-known plasticity-stability dilemma. In such settings, learning dynamics tend to yield an increasingly anisotropic feature space. This raises a fundamental question: should isotropy be enforced to achieve a better balance between stability and plasticity, and thereby mitigate catastrophic forgetting? In this paper, we investigate whether promoting feature-space isotropy can enhance representation quality in continual learning. Through experiments using contrastive continual learning techniques on the CIFAR-10 and CIFAR-100 datasets, we find that isotropic regularization fails to improve, and can in fact degrade, model accuracy in continual settings. Our results highlight essential differences in feature geometry between centralized and continual learning, suggesting that isotropy, while beneficial in centralized setups, may not constitute an appropriate inductive bias for non-stationary learning scenarios.
Gesture recognition and hand pose tracking using wearable mmWave radars
Authors: Otso Luukkanen, Dariush Salami, Huseyin Yigitler and Stephan Sigg (Aalto University, Finland)
We address the recognition and tracking of hand pose via a wrist-worn mmWave sensor. In particular, the radar monitors the motion of the flexor tendons on the lower part of the wrist. Using a 3D Convolutional Neural Network (CNN) model, we achieved a mean detection accuracy of 90.6% on a 10-gesture dataset that was collected from 11 participants and split into separate training, validation, and test sets. For person-independent classification, we trained the model via leave-one-person-out cross-validation on a 10-participant dataset and achieved a mean accuracy of only 32.0% with a standard deviation of 14.4 percentage points, as well as large variations between subjects. We note the limitations of this system, e.g., the need for calibration for a new user, and address these issues by presenting an analysis of the dataset and proposing ways to further improve the model's performance. Specifically, dataset statistics suggest that the full Intermediate Frequency (IF) signal bandwidth and Doppler velocity range were not fully utilized when the samples were recorded. Finally, we have made the gesture dataset publicly available.
Smart City Myths in the Wild: A Post-Mortem of an Edge AI Energy Failure in Wildlife Deterrence
Authors: Mirosław Hajder (University of Information Technology and Management, Poland); Piotr Hajder (AGH University of Krakow, Poland); Mateusz Liput and Janusz Kolbusz (University of Information Technology and Management in Rzeszow, Poland); Robert Rogolski and Lukasz Kiszkowiak (Military University of Technology, Poland); Lucyna Hajder (AGH University of Krakow, Poland)
Deploying an autonomous wildlife deterrence system in the Carpathian Mountains, we operated under the hypothesis that industrial Edge AI solutions, standard in urban surveillance, would remain effective in a wilderness environment. This assumption proved to be a critical engineering failure. This paper presents a post-mortem analysis of a deployment that suffered a total energy collapse in less than 24 hours. We demonstrate that the trivial solution to the energy deficit, hardware scaling, was infeasible. The project was constrained by strict landscape regulations in protected areas, a requirement for high mobility (pastoral economy), and drastic temperature amplitudes (diurnal variations of 35°C), which degraded energy storage efficiency. Within these rigid constraints, our baseline architecture failed for two primary reasons: (1) Hydrometeorological Blindness: local microclimates invalidated solar models due to prolonged and supernormative condensation on photovoltaic panels, driven by the proximity of wetlands and water bodies. (2) Trigger Storms: the system was overwhelmed not only by wind-induced vegetation movement but primarily by high activity of non-target fauna. In the baseline architecture, the lack of hierarchical filtering forced the continuous wake-up of the energy-intensive GPU module for neutral objects (e.g., small animals or livestock). This rapidly depleted the energy budget reserved for detecting large mammals. This work documents the process of emergency system re-engineering. We describe the transition from the failed "Always-On" paradigm to a hierarchical architecture. Only the introduction of cascaded wake-up and deep sleep modes allowed the system to survive in conditions unforeseen by our initial models.
16:45 - 17:30
Open Discussion
17:30 - 17:35
Closing Words

REGISTRATION

Each accepted workshop paper requires a full PerCom registration (no workshop-only registration is available); otherwise, the paper will be withdrawn from publication. The authors of all accepted papers must guarantee that their paper will be presented at the workshop. Papers not presented at the workshop will be considered a "no-show" and will not be included in the proceedings.

Registration link: here

COMMITTEES

Organizing Committee

Ella Peltonen, University of Oulu
Malte Josten, University of Duisburg-Essen
Peter Zdankin, University of Duisburg-Essen
Tanya Shreedhar, TU Delft

Steering Committee

Nitinder Mohan, TU Delft

Technical Program Committee

Ambuj Varshney, National University of Singapore
Asanga Udugama, Universität Bremen
Daniela Nicklas, University of Bamberg
Gürkan Solmaz, NEC Labs Europe
Isabel Wagner, University of Basel
Jan S. Rellermeyer, University of Hannover
Jon Crowcroft, University of Cambridge
Jörg Ott, Technical University of Munich
Jussi Kangasharju, University of Helsinki
Lik-Hang Lee, The Hong Kong Polytechnic University
Oliver Gasser, IPinfo
Roman Kolcun, University of Cambridge
Sandip Chakraborty, Indian Institute of Technology
Simone Ferlin, Red Hat
Stephan Sigg, Aalto University
Torben Weis, University of Duisburg-Essen
Vitor Fortes Rey, DFKI