Meet the speakers
Read more about each speaker and their topic
All speakers
Performance advocate and counselor | podcaster | public speaker
United States
Bio
Leandro Melendez is a globally recognized performance testing advocate with over 20 years of experience in IT, and more than 10 years focused on performance and quality engineering. He has helped teams around the world elevate their performance practices and served major S&P 500 technology clients across multiple continents.
Leandro is an international speaker with keynotes, workshops, and talks at major software testing events. He’s the author of the popular performance testing blog Señor Performo, where he shares deep insights and practical strategies for testers and engineers. Señor Performo also hosts the Spanish-language PerfBytes podcast and creates engaging testing content on YouTube.
Session | “The Rise of ObserQAbility!”
Many testers feel that testing is somehow broken. But testing is not broken, just incomplete. Especially in a world of distributed systems, elasticity, ephemeral cloud-native architectures, and constant deployment, QA needs a new edge.
In this keynote, Leandro introduces ObserQAbility, a new mindset that blends observability with modern QA practices. The future of QA belongs to those who don’t just test software. They observe it!
Through practical insights and memorable live examples, you’ll see ObserQAbility in action.
Senior Architect | Quality Engineering
Congleton, England, United Kingdom
Bio
Richard Bradshaw is an industry leader in software testing & quality engineering, with a strong focus on automation, AI and whole-team quality approaches. He is an experienced keynote speaker, teacher, strategist and leader, and likes to describe himself as a generally friendly guy. He shares his passion for testing through consulting, training and presentations on a variety of testing topics. With over 15 years of testing experience, he has deep insight into the world of testing and software development.
Session | “Rethinking the Automation in Testing Principles”
Test automation is often judged by the tools we use or the number of tests we run, but real impact comes from how thoughtfully we apply it. The Automation in Testing principles were originally designed to shift focus from shallow uses of automation toward applying it in ways that truly support testing through strategy, creation, usage and education. In this keynote, Richard revisits these principles in today’s software world, where AI, complex systems and new ways of working are challenging current test automation approaches.
Richard will explore how these principles have already helped teams achieve excellence in testing outcomes, stronger collaboration and deeper recognition of the value test automation can bring. He will also share lessons learned and industry observations where these principles have been neglected, offering a candid look at the gap between adopting automation and achieving impact with automation.
Building on this reflection, Richard introduces a refreshed set of principles designed for today’s testing challenges and those in the near future. These principles support a mindset for approaching test automation that helps you assess and maximise impact, ensuring automation not only supports your development teams but also delivers real business value.
- Analyse how the original Automation in Testing principles influence testing effectiveness, team collaboration, and recognition of automation value in modern software teams.
- Evaluate real-world scenarios where adherence to or neglect of these principles has impacted testing outcomes, identifying gaps between automation adoption and meaningful impact.
- Apply the new Automation in Testing principles to your existing context to help maximise automation’s impact for both development teams and the business.
Senior Data | DWH Consultant
Bulgaria
Bio
Valery Penev brings over a decade and a half of experience in data warehouse consulting and data engineering at Adastra. Throughout his career, he has worked across diverse industries, clients, and roles. In recent years, his focus has expanded to include cloud technologies. Valery is goal-oriented and analytical, with excellent interpersonal skills. For eight years, he served as a Talent Manager, mentoring a team of ten, and since 2016 he has lectured at the Adastra Academy. He spent almost a decade playing a key role in technical interviews as part of Adastra’s hiring process.
In 2020, he co-founded “Out of the Box Ltd.” – a company focused on web services, digital marketing, and testing services, helping small and mid-sized businesses build a strong online presence.
His superpower is being creative and always positive.
Session | “To test, or not to test – that is the question.”
Inspired by Shakespeare’s timeless question, this talk dives into the modern tester’s most significant dilemma: deciding what, when, and how to test in a world of limited time, shifting priorities, and constant delivery pressure.
Through a blend of practical examples, humor, and a touch of drama, we’ll explore how testers can balance intuition with data, speed with quality, and automation with exploration. Attendees will walk away with actionable heuristics and mental models for making smarter, context-driven testing decisions – especially when “it depends” is no longer enough.
Because great testing isn’t just about finding bugs – it’s about making choices that move projects, teams, and people forward with purpose and confidence.
Intended audience: Testers at every level
Takeaways:
1. Testing is not binary – it’s strategic. Learn to ask “What matters most right now?”
2. Discover simple heuristics to guide smarter test decisions in high-pressure environments.
3. Have a little fun while embracing the complexity and art of testing judgment.
Coach, Author
York, England, United Kingdom
Bio
Katja is the founder of Kato Coaching and a respected voice in the testing and quality engineering community. With years of experience helping teams improve software delivery and testing strategy, she brings a sharp perspective on how quality evolves in modern tech organizations.
Session | “AI didn’t break testing. It just made bad decisions faster.”
As AI becomes embedded in software delivery pipelines, testing is often one of the first areas where teams experiment. The results are mixed: impressive demos, fragile outcomes, and growing uncertainty about what can be trusted.
This talk reframes AI in testing as a quality and decision problem rather than a tooling problem. It explores what AI is genuinely good at, where it struggles, and how teams can reason about risk before integrating it into their testing workflows.
Attendees will be introduced to a practical framework for deciding how and where AI can be used as an accelerator, while keeping accountability, intent, and quality firmly in human hands.
Test Consultant
Gothenburg, Sweden
Bio
Mert Yurdakul is a Software Test Consultant at Test Scouts AB with a genuine enthusiasm for software quality and AI. Before his current role as a test automation engineer in the automotive industry, he developed a proof-of-concept full-stack AI application for defect analysis, which he had the opportunity to co-present at the international QA&TEST Embedded 2024 conference.
He enjoys bridging academia and industry by supervising thesis projects in collaboration with Chalmers University of Technology and University of Gothenburg, drawing on his own research background in the pharmaceutical sector and his studies at Chalmers.
Session | “New Character Unlocked: The Reputation Engineer”
Solving the test oracle problem, determining the correct output for a given input, has become even tougher since Generative AI (GenAI) entered the software development life cycle. The nondeterministic nature of its outcomes flips the Agile Test Pyramid: more need for late-stage testing efforts such as manual and end-to-end (E2E) testing, while the volume of unit and integration tests shrinks.
This shift leaves us with urgent questions: How do we ensure the quality of the system under test when GenAI is highly involved in development? How do we add value when GenAI is capable of doing so much? And where do testers fit in when developers are taking on more implementation-level checks?
While searching for these answers, we will explore the competencies required to build the ultimate tester of the future. We will introduce the ‘Reputation Engineer’ – a professional construct that redefines the tester’s role from an executor to a strategic architect of trust and quality.
Designed for testers, developers, and leaders, this talk offers a mindset for evolving traditional quality practices to meet the demands of AI-powered software lifecycles.
Practice lead
Dokkum, Friesland, Netherlands
Bio
Willem K. is a technology professional with a strong interest in software quality, testing practices, and modern development processes. Through his work in the tech industry, he has gained valuable experience collaborating with cross-functional teams to improve software reliability, streamline testing workflows, and contribute to the delivery of high-quality digital products.
Willem is passionate about exploring how testing evolves alongside modern technologies, agile development, and continuous delivery. He enjoys sharing insights about the role of quality in fast-moving development environments and encouraging teams to rethink how testing can add value throughout the entire software development lifecycle.
Test Specialist & Competence Lead
Greater Groningen Area
Bio
Arnoud Gorter is a software testing professional and Test Specialist & Competence Lead at the Ministry of Justice and Security in the Netherlands. He is passionate about improving software quality, test automation, and collaboration within testing teams. Arnoud actively contributes to the testing community by sharing knowledge through workshops, conferences, and community events, where he focuses on practical approaches to building effective and maintainable automated testing strategies.
Workshop | “Automate smart, not hard”
You are part of a new DevOps team working for the infamous KafkaCorp, where testing is mostly done ad hoc and there is no test automation strategy in place. You have inherited an application landscape, along with the test automation set that comes with it. There are many tests present, and we mean MANY. The effectiveness of these tests is not measured. You are not sure which risks are covered by the tests or what these tests tell you about the quality of the software. Time to get to work!
Let’s start from another perspective: instead of building a dream world from scratch, we start with a nightmare. The metaphor for our test set is a baseplate full of multi-colored LEGO bricks in all shapes and sizes, not structured at all. In three fast-paced rounds we transform our test automation set into a smart one.
In the first round, we start by taking a step back: “What are we doing here?” “What is the purpose of this automation set?” “Which risks are covered by this set?” Let’s start trimming our set by removing the unnecessary bricks.
In the second round we introduce new bricks with different types and shapes, symbolizing new test techniques which can be used to breathe some fresh air into our test automation set.
In the final round, things are looking a lot better. Let’s see how we can keep it this way in the future. By introducing monitoring and looking at the traceability of our tests, we ensure that what we are doing makes sense. We add new colored LEGO bricks and smaller sub-baseplates to categorize the tests and link them to the functionality.
Congratulations, you automated smart!
Our key takeaways
- Eliminate unnecessary tests
- Introduce new ways of automating tests
- Introduce traceability and monitoring to prevent the chaos from returning
What comes later
Our takeaways give attendees knowledge they can use to optimize their own automation set. Taking a step back to evaluate their test automation set, applying new automation techniques and adding measurement will ensure that attendees are skeptical not only about the tests they write but also about how these fit into the bigger picture.
Structure
- Introduction and workshop purpose (10 min)
- Round 1 (15 min) + (5 min evaluation): introduction, getting to know your team (What are we doing?!) and what you need to get out of the first round (narrow it down)
- Round 2 (15 min) + (5 min evaluation) (New techniques)
- Round 3 (15 min) + (5 min evaluation) (Traceability)
- Closure and questions (5 min)
- Slack time (15 min)
Developer Advocate
USA
Bio
Jenna is a software tester and developer advocate with over a decade of experience. They’ve spoken at a number of dev and test conferences and are passionate about risk-based testing, building community within agile teams, developing the next generation of testers, and A11y. When not testing, Jenna loves going to punk rock shows and live pro wrestling events with their husband Bob, traveling, and cats; their favorites are the three that share their home: Maka, Milton, and Excalipurr.
Session | “A Note From Your User”
We often design for the “happy path”: a user who is calm, focused, and sitting at a desk. We don’t design for the user whose hands are shaking, whose vision is blurring, and whose brain is operating on a fraction of its usual capacity. But when software is a medical necessity, your users encounter your work while they are scared, sick, and vulnerable. In this state, a “glitch” isn’t just a ticket in a backlog; it is a moment of profound abandonment.
In this session, I share a raw, first-person experience report on navigating a diabetes diagnosis through the lens of using a Continuous Glucose Monitor (CGM). While CGMs are life-saving innovations, the reality of a “buggy” device takes on a terrifying weight when it results in missed lows and dangerously inaccurate readings. Using the Think, Feel, Say UX framework, we will step through the psychological and physiological toll of relying on a device that you can no longer trust.
We will explore the moments where the “technical requirements” were met, but the human requirements were ignored. If your system fails when a user is at their most compromised, you haven’t just missed a requirement; you’ve failed a person in crisis.
Key takeaways include:
- Defining the High-Stakes Bug: Learning to identify when a bug is a minor inconvenience versus when it becomes life-threatening.
- The ”Think, Feel, Say” Crisis Map: A deep dive into the user’s internal state during a device failure.
- Designing for the Compromised User: Practical strategies for building empathy into error states, alerts, and data visualization for users in distress.
This isn’t just a talk about medical devices; it’s a call to action for every creator to recognize the human pulse behind every data point and the weight of the responsibility we carry as builders.
Test lead and test consultant
Sweden
Bio
Bengt Augustsson, co‑founder of Test Scouts, has been a steady presence in the Swedish testing community for years. He’s helped shape conferences, meetups, and testing initiatives, and he has a knack for creating environments where testers feel welcome, curious, and willing to experiment. Bengt mixes deep experience with a relaxed, lightly mischievous humour — the kind that puts a room at ease without ever distracting from the work. He’s also very good at keeping things running smoothly, even when everyone else is quietly pretending everything is fine.
Test consultant
Sweden
Bio
Robert Hennersten‑Manley has spent his career leading testing and quality work across a mix of teams, domains, and delivery styles. His approach is practical, calm, and quietly analytical — he can take a messy problem, straighten it out and make everyone wonder why it ever looked complicated. Rob brings a dry sense of humour, a strong instinct for the human side of testing, and a focus on helping people feel capable rather than overwhelmed. He’s at his best when guiding people through new territory whilst making it feel fun.
Workshop | “AI as Strategic Test Assistance”
Who is this for?
This workshop is for testers who feel unsure about AI, tried it early on and stepped away, or simply haven’t seen how it fits into real, day‑to‑day testing. It’s designed as a clear, low‑pressure entry point – something you can walk into without prior experience and walk out of with practical takeaways you can use immediately at your desk.
It’s not aimed at people already deep into AI‑assisted testing. It’s for those who want a guided, human‑centred introduction that makes the whole thing feel understandable rather than overwhelming.
What the workshop covers
We treat AI as a test assistant – helpful, quick, sometimes insightful, but always with human oversight. You’ll start with your own ideas about how to test an object we provide, then we’ll gradually introduce AI into the process so you can see where it genuinely supports your thinking.
Across the session, you’ll explore different levels of involvement, from simple prompting to more structured collaboration. You’ll get straightforward guides, prompt patterns, and the key “watch outs” for when the model starts drifting into fiction.
The goal is to build your confidence in using AI, understand where it adds value to your testing and recognise where it doesn’t.
Founder, Security Expert
Stockholm, Sweden
Workshop | “Testing the Unpredictable: Building Security Test Plans for AI/LLM Systems”
AI and LLM-based applications introduce a fundamentally different security landscape compared to traditional software systems—one shaped by probabilistic behavior, dynamic inputs, and complex model interactions. This workshop provides a practical introduction to testing these systems using guidance from the OWASP Testing Guide for AI/LLM.
We begin with an overview of why AI systems challenge conventional security assumptions, highlighting key differences such as non-deterministic outputs, prompt-driven behavior, and emerging attack vectors like prompt injection and data exfiltration. Participants will then be guided through the structure and principles of the OWASP Testing Guide, including how to identify risks, define test objectives, and approach validation of AI-specific threats.
The core of the workshop is hands-on: attendees will develop their own security test plan for an AI/LLM-based application, applying OWASP methodologies to real-world-inspired scenarios. By the end of the session, participants will have both a deeper conceptual understanding and a concrete artifact they can use to strengthen the security of AI systems in practice.
Session | “When AI Goes Wrong: Real-World Security Failures in LLM Applications”
As AI-powered applications rapidly move from experimentation to production, security risks are no longer theoretical—they are already being exploited in the real world. This presentation explores a series of actual security incidents involving AI and LLM-based systems, revealing how these failures occurred and what made them possible.
Each case study is paired with a clear, technical breakdown of the underlying vulnerability, connecting real-world attacks to key concepts such as prompt injection, data leakage, and insecure integrations. Drawing heavily on frameworks like the OWASP Top 10 for AI Applications, the session bridges the gap between practice and theory.
Attendees will gain a deeper understanding of how AI systems can be manipulated, what common patterns attackers exploit, and—most importantly—how to design and defend LLM applications more securely in an evolving threat landscape.
Head of QA Business Practice
Poland
Bio
Kinga Michalik has been working in software testing for over 25 years, gaining experience across small, medium and large-scale projects. Throughout her career, she has taken on roles such as tester, test coordinator, test manager, business analyst, Scrum Master and Product Owner. She has worked across multiple industries, including banking, insurance, finance, aviation and telecommunications.
Currently, she serves as Head of QA Business Practice, where she leads a team of over 20 testing professionals. She specializes in building and maintaining test strategies, processes and documentation, and has extensive experience in leading QA teams and delivering complex testing initiatives. She also has practical experience in test automation and non-functional testing.
Kinga is an active trainer and speaker, delivering workshops and training sessions in software testing and SQL, as well as presenting at conferences and industry events. She combines strong technical expertise with excellent communication skills, supported by her educational background in informatics and public speaking.
In her spare time, she enjoys reading books, hiking in the mountains, packrafting and exploring Poland by bike.
Workshop | “Building effective Test Plans in agile, dynamic environments”
Tester Extraordinaire, Director
Helsinki
Bio
Maaret Pyhäjärvi is an exploratory tester extraordinaire and Director, Testing Services at CGI. She is an empirical technologist, a tester and a (polyglot) programmer, a catalyst for improvement, a speaker and an author, a conference designer and a community facilitator. She has received two prestigious global testing awards, Most Influential Agile Testing Professional Person 2016 (MIATPP) and the EuroSTAR Testing Excellence Award (2020), has been selected among the Top 100 Most Influential in ICT in Finland 2019–2025, and was awarded the Tester Worth Appreciating Award in Finland in 2022. She has spoken at events in 28 countries, delivering over 500 sessions. With 29 years of exploratory testing under her belt, she crafts her work role into a mix of hands-on testing and programming, and leading and enabling others. She leads TechVoices, enabling new speakers, blogs at https://visible-quality.blogspot.fi and is the author of three books: Ensemble Programming Guidebook, Exploratory Testing and Strong-Style Pair Programming.
Workshop | “Architecture-aware Exploring of Programmatic Interfaces”
The API testing conversation is dominated by tool tutorials and automation frameworks. What is missing is the thinking layer – the architectural awareness and structured perspectives for learning while collecting information that turn a capable technician into a tester who finds what matters. This workshop fills that gap through experiential learning and practice.
The core question with testing is results coverage – are we finding the problems there are to find? With modern architectures and API thinking, our options for exploratory testing are greatly increased, but how do we do this systematically? APIs – or programmatic interfaces – can be found as the exposed ways of packaging our business logic, but also at the level of method signatures in unit testing. If 77% of production failures can be reproduced by a unit test, architecture awareness allows us to shift our testing down, exploring APIs as well as units and increasing the opportunities for technical collaboration within our teams.
What participants leave with:
- a habit of asking “what’s behind this endpoint?” while testing it
- a perspectives checklist they have already used to explore an API
- examples they can use as reference in their work
Test Coast 2025