Where is the Mutually Assured Destruction (MAD) line for AI?

Everybody is racing toward superintelligence. It's an arms race, just like nuclear weapons in the 20th century. We all know that past a certain number of warheads, nukes can destroy the entire world.

AI feels like the early days of nukes, before anyone knew what that number was.

There will come a day when you won't need the best AI to destroy everything, just one that's good enough. Will the best one be able to stop it?