I'm so sorry, Fordman... I thought I had replied to you.
Thank you very much for your words on my writing. I keep working on it and hope it flourishes more over time.
Automation with intelligence is a scary thing as computers and the machinery they operate become ever more advanced. Limitations can be written into an AI's rules, but don't most humans have the same rules instilled over their growing years so they understand the difference between right and wrong? And how many of them ignore those instructions? Once a computer has 'free will' or 'discretionary action', couldn't it decide which rules it wants to adhere to and which it doesn't? Also, most humans have a fear of dying... will an AI have the same fear? When a human does 'spring a cog' and lose control, they can generally only damage what's in their vicinity... a 'networked' AI can reach out far and wide.
Yes, it is a frightening thought.
Thanks again... and again, I'm sorry. I thought I had replied... I remember writing something to this effect about AIs... but I must not have hit post.