GPT Is My CoPilot

35:26

Content is provided by CNA. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by CNA or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ja.player.fm/legal

Andy and Dave discuss the latest in AI news, including a report that the Israel Defense Forces used a swarm of small drones in mid-May in Gaza to locate, identify, and attack Hamas militants, using Thor, a 9-kilogram quadrotor drone. A paper in the Journal of the American Medical Association examines an early warning system for sepsis and finds that it missed most cases (67%) and frequently issued false alarms (results the developer contests). A new bill, the Consumer Safety Technology Act, directs the US Consumer Product Safety Commission to run a pilot program using AI to help with safety inspections. A survey from FICO on The State of Responsible AI (2021) shows, among other things, a disinterest in the ethical and responsible use of AI among business leaders, with 65% of companies saying they can't explain how specific AI model predictions are made, and only 22% having an AI ethics board to consider questions of AI ethics and fairness. In a similar vein, a survey from the Pew Research Center and Elon University's Imagining the Internet Center found that 68% of respondents (from 602 leaders in the AI field) believe that ethical AI principles will NOT be employed by most AI systems within the next decade; the survey includes a summary of the respondents' worries and hopes, as well as some additional commentary. GitHub partners with OpenAI to launch Copilot, a "programming partner" that uses contextual cues to suggest new code. Researchers from Stanford University, UC San Diego, and MIT present Physion, a visual and physical prediction benchmark that measures predictions about commonplace real-world physical events (such as objects colliding, dropping, rolling, or dominoing). CSET releases a report on Machine Learning and Cybersecurity: Hype and Reality, finding it unlikely that machine learning will fundamentally transform cyber defense. Bengio, LeCun, and Hinton join together to pen a white paper on the role of deep learning in AI, not surprisingly eschewing the need for symbolic systems. Aston Zhang, Zachary C. Lipton, and Alex J. Smola release the latest version of Dive into Deep Learning, now over 1,000 pages and available only as an online version.
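
As a purely hypothetical illustration of what "contextual cues" means in the Copilot item above (this is not GitHub's actual output or API), a Copilot-style assistant reads a comment and a function signature as its cue and proposes a plausible body, along the lines of this minimal sketch:

```python
# Hypothetical sketch of comment-driven code suggestion.
# The contextual cues are the comment and the function signature;
# the body is the kind of completion a Copilot-style tool might propose.

def rolling_average(values, window):
    """Return the simple moving average of `values` over `window` points."""
    if window <= 0:
        raise ValueError("window must be positive")
    averages = []
    for i in range(len(values) - window + 1):
        averages.append(sum(values[i:i + window]) / window)
    return averages

if __name__ == "__main__":
    print(rolling_average([1.0, 2.0, 3.0, 4.0, 5.0], 3))  # [2.0, 3.0, 4.0]
```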

Follow the link below to visit our website and explore the links mentioned in the episode.

https://www.cna.org/CAAI/audio-video
