
Ted, I'm wary of taking up too much of your time. But if you're interested, the link below is my attempt to address what I see as the underlying problem: an outdated relationship with knowledge. Don't feel obligated to read it, but if you do, any feedback or suggestions for improvement are most appreciated.

https://www.facebook.com/phil.tanny/posts/pfbid028vNnknjphbS3kQdGW8eat6KDp1teZTfMu2TAtj6eKQUg1cVE6VgFekzy8g38cp4jl


Thank you, this looks like quite an interesting Substack; I'm happy to have found it.

You write, "How to create AIs aligned with human flourishing is currently an unsolved problem."

For the moment, let's assume we somehow learn to ensure that the AI we create aligns with our values. I've not yet understood how this solves the problem. What is the plan for dealing with AI created by those who don't share our values? Russia, China, Iran, North Korea, corrupt governments around the world, criminal gangs, and so on.

Isn't the concept of AI safety basically a fantasy? Do AI developers sincerely not see that? Or, on a more cynical theory, are they feeding us AI safety stories to pacify us while they push the industry past the point of no return? I really don't know, and I'm interested to learn more.
