After Twitter extended its character limit from 140 to 280 in November 2017, there was a bit of an uproar. Stephen King and JK Rowling were among the voices arguing that the 140 character constraint actually spurred creativity. Others speculated that the enforced brevity mitigated information overload. Some even resorted to using a Google Chrome extension to cap their tweets at the original 140 characters.

Twitter thought the tweak would boost user engagement, satisfaction, and ease of expression. Aliza Rosen, a Twitter product manager, and Ikuhiro Ihara, a senior software engineer, noted that the expanded limit would apply only to languages that need more characters to convey the same meaning. “…in languages like Japanese, Korean, and Chinese you can convey about double the amount of information in one character as you can in many other languages, like English, Spanish, Portuguese, or French,” they wrote.
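Twitter's production counting rules live in its open-source twitter-text library, which assigns per-character weights by Unicode range. As a rough illustrative sketch only (the Unicode ranges and weights below are simplified assumptions, not Twitter's actual configuration), the idea of counting dense scripts double can be expressed like this:

```python
def weighted_length(text: str) -> int:
    """Count each CJK/Kana/Hangul character as 2 units, all others as 1.

    The ranges below are a simplification for illustration; the real
    twitter-text library defines its own weight tables.
    """
    total = 0
    for ch in text:
        cp = ord(ch)
        if (0x1100 <= cp <= 0x11FF      # Hangul Jamo
                or 0x2E80 <= cp <= 0x9FFF   # CJK radicals, Kana, Han
                or 0xAC00 <= cp <= 0xD7AF   # Hangul syllables
                or 0xF900 <= cp <= 0xFAFF): # CJK compatibility ideographs
            total += 2
        else:
            total += 1
    return total


def fits_in_tweet(text: str, limit: int = 280) -> bool:
    """Check whether text fits under the limit using weighted counting."""
    return weighted_length(text) <= limit
```

Under this scheme a five-character Japanese greeting like "こんにちは" consumes ten units, reflecting the claim above that one character in those languages carries roughly twice the information.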

Now, the company has analyzed the impact of the increased character limit and released the data. Some of the results are surprising. The number of tweets with a question mark has increased by 30 percent. Why would this be? How does a seemingly unrelated change—a higher character limit—impact the ways that we have conversations?

Questions work best when they are well-framed and contextualized. It could be that users previously felt they could only scratch the surface of a topic, with too little room to pose questions, whether rhetorical or sincere.

Communication is a rich process, shaped by nuance, tone, and grammatical intricacy. The character limit increase has actually led to an upsurge in politeness. Data indicates that 54 percent more tweets use the word “please.” Additionally, users were freed up to express their appreciation—“thank you” is up by 22 percent.

The change hasn’t led to the loquaciousness that people feared. Globally, only 6 percent of all tweets run over 140 characters. In English, only about 1 percent of tweets reach the 280-character limit.

Of course, this one change is best analyzed within a wider context of policies and practices. Social media platforms make many decisions that impact the state of discourse. Some platforms have inadvertently enabled users to flag posts they disagree with as inappropriate, even though those posts don’t necessarily violate the terms of service. In other instances, real users have been mistaken for Russian bots. Social media platforms don’t always provide sufficient clarity or avenues of recourse in these situations. Nor do they allocate resources for a staff of trained mediators to ensure fair play.

On Twitter, forms of algorithmic punishment can include temporary account lock-outs, “timeouts” whereby users are restricted to read-only mode, and shadow banning, in which a problematic user’s contributions are made less prominent. The hope is that the user will feel unsatisfied by the lack of reactions and cease to engage in the problematic behavior.

Twitter has acknowledged that its efforts to detect bots and trolls could result in false positives. The company’s goal is to learn fast and make processes and tools smarter. In a May 2018 blog post, Twitter executives stated that they use machine learning in addition to human review processes to moderate conversations. “There are many new signals we’re taking in, most of which are not visible externally,” they wrote.

The company also drew an inherently subjective line in the sand. The authors wrote, “Some troll-like behavior is fun, good, and humorous. What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search.”

CEO Jack Dorsey has previously indicated that he is open to making major changes to his platform as needed, possibly including the removal of the heart-shaped “like” button.

In addition to generating a return on investment and managing features, social media platforms have to contend with hate speech and conspiracy theories. In 2018, YouTube, Facebook, Stitcher, Apple, Spotify, and PayPal all made the decision to ban Alex Jones.

Throughout the early years of Trump’s presidency, a movement grew that sought to ban the president from Twitter, a channel that he habitually and famously uses. Activists claimed that the president’s tweets often violate the platform’s policies. Twitter concluded that access to the president’s account serves the public interest.

As conversations are conducted digitally, the very fabric of our arguments has changed. For instance, a video creator might cut to a copyrighted video clip in the same manner that an essayist would quote from a published book. The concept of fair use allows excerpted media to be re-purposed in a limited way, provided it serves the purpose of commentary, criticism, or parody.

Social media platforms have the power to forge connections, elevate voices, and energize democracy. The ability to post text, images, and videos may seem increasingly quaint as VR and AR technologies mature, but for now, these formats remain the medium of our discourse. Diverse stakeholders have an interest in securing the openness, fairness, and effectiveness of social media platforms. But all of these qualities are, perhaps unavoidably, subjective and ambiguous.

Image Credit: Tero Vesalainen / Shutterstock.com

David Pring-Mill is a writer and filmmaker. His writing has appeared in Datanami, The National Interest, openDemocracy, and elsewhere.
