
I agree in general that a focus on human accountability, responsibility, and capability is required, and government-mandated certification requirements may be needed to get there. However, we should be careful to carve out the specific risks to address, so as not to stifle the vast innovation potential in research, small companies, and indie devs. For example, does a copywriting AI tool require the same degree of diligence as a medical-device AI? Does an autonomous-car AI pose the same kind of risk as a talk-to-your-documents AI? I think not. While the medical and civil engineering fields may offer interesting insights on how to approach effective regulation, those fields have an entirely different risk profile across the board. Software is not the same.

P.S. I liked your thoughts; I didn't bother to read any of the AI blurbs, though.

author

Agree with your views. FYI, the blurbs tend to be quite informative, especially when generated from the analysis of an actual paper or research article. There is a lot in there that can, and probably will, inspire.
