
Episode 14: AI-Powered Testing Practices with Alex Martins

55:34

Welcome to episode 14 of the Why Didn’t You Test That? Podcast! In this episode, the Curiosity Software team, Rich Jordan and Ben Johnson-Ward, are joined by Alex Martins, VP of Strategy at Katalon, to discuss the implications and challenges of AI-Powered Testing.

This episode goes beyond the hype and marketing euphoria of AI to weigh up the productivity gains coming from GPT-4 and large language models (LLMs) in the software quality space. Guest Alex Martins leads the conversation around the need to put the tester at the centre of AI-powered testing, and only then start building out AI use cases and safeguards.

While the development community has seen tangible gains from AI deployment, the uplift in AI-powered testing practices is just beginning. So, how will this impact software testing professionals? And how will subject matter expert (SME) knowledge evolve as organizations develop bespoke LLMs?

Ben Johnson-Ward argues that if artificial intelligence is used to create tests, then testers will have to evaluate the output of those tests to determine whether it is correct. This approach may reduce productivity, as testers spend time testing the output of AI-generated tests. Testers will be able to fine-tune their AI models and build out a broader toolkit, but what does this look like? As organizations adopt AI in testing, there will also be an impact on the metrics of repeatability, explainability, and auditability.

With this in mind, internal AI committees can establish rules to reduce uncertainty. Rich Jordan follows up on Ben’s point, explaining how, from the human perspective, AI may be limited in determining whether an application meets the needs of its users. In this use case, AI becomes the co-pilot, a new tool for experts to enhance collaboration, while testers remain the primary pilots. Repeatability is discussed as a characteristic that humans are comfortable with in testing, but can AI offer better alternatives to traditional methods of monitoring code changes and integration flows?

AI-powered practices in software testing and test coverage are still in their early stages. This requires ongoing collaboration, learning, and sharing of experiences among organizations and industry professionals.

Finally, the possibilities and potential benefits of AI are too significant to ignore, despite the discomfort and challenges it brings in delivering quality software, faster.

The Curiosity Software Podcast, featuring Huw Price, Rich Jordan and the Curiosity team! Together, they share their insight and expertise to help you improve your journey to quality software delivery, by considering how much you really understand about your systems and, when things inevitably go wrong, why didn’t you test that? Spotify | YouTube | iTunes | Google Podcasts | Amazon Music | Deezer | RSS Feed
