Dear AI ethics enthusiasts,
The fight against data theft through AI training now feels endless. Everywhere I turn, I need to dig through privacy policies and menus.
The problem is not only that training on my data exploits my IP and violates my privacy. Even when I do manage to turn AI training off for myself, I am bothered by the fact that services I benefit from violate other people’s IP and privacy rights, probably without them even knowing it.
Spell checkers are often left out of this conversation, but they are crucial because they see everything you type. This is extremely dangerous. Anyone who obtains this data, legally or not, can build personal and professional versions of you. To name just two risks, these can be used for fraud, or to develop chatbots that compete with you professionally: offering your services, in your words, without compensating you.
Yesterday, I went down the privacy policy rabbit hole with Grammarly for my own use, which made me curious about its competitors. So today, I’m sharing the training policies of 5 spell checkers. (See my previous audits of chatbots and social media).
My Grammarly review is open to all, and paid subscribers get reviews of 4 more (thanks for your support!): ProWritingAid, Ginger, Hemingway Editor, and WhiteSmoke. The state of affairs is grim. Only one of the services commits not to train on user data. Or, to put a positive spin on it: not all spell checkers train on user data! :)
For dessert, an AI-generated take on this post!
Grammarly
Grammarly trains on texts from individual accounts outside of the EU. Here’s their policy, and I found complaints about it on Reddit from a year ago. The good news is that you can opt out through the privacy settings. The bad news is that this toggle likely only applies going forward, so your past data is still theirs.
Here’s where to opt out:
(ProTip: Also go into “tailored assistance” to turn off additional data collection)