Content Filtering is (Mostly) Bad

In trying to be at the forefront of my K12 org's cybersecurity effort, I've observed a lot of…stuff. Some of that…stuff…and how decisions get made around it has been eye-opening, because K12 presents some unique constraints that don't exist in other organizations, the largest being that students are children and you cannot fire them. Damn.

Some of what I've observed is in line with what you see in any org, and some of it is par for the course in very large orgs, which is what Arlington Public Schools (for whom I do not speak) is: a very big organization. Some of what I've seen is highly technical, but a lot of it, and I think this is generally true in cybersecurity regardless of org size or mission, is nuanced and related to how people behave and what we as technologists can do about that.

Which brings me to content filtering.

Let me back up. Handling a request for content filtering, which is "subject cannot go to this URL because reasons," is part of my duties as a K12 technology person. But let me be perfectly clear about this: content filtering is not cybersecurity, although well-intentioned people like to pretend that it is. Nowhere does "xyz is a distraction from learning/working" fit into the CIA triad. "But what about malware/phishing/etc. websites?" Those websites should be blocked – per a comprehensive security policy – because there's a risk-based, technical reason to block them. But malware is malware. Phishing is phishing. Content…is internet content, a thing you want to see on the internet that does not pose a technical risk to the organization.

If I haven't convinced you, consider where your org's content policy comes from in relation to its cybersecurity policy. I, as a security professional, may get a report of a website people are using to download torrents. I'm going to block that website because torrent sites are full of active threats, to say nothing of the legal risk of allowing org members to seed torrents. But I wouldn't decide to block, say, https://spotify.com, and if I did, it would likely go over poorly.* On the other hand, the CEO of a company might decide to block Netflix because it's a "work distraction," even though it doesn't pose any cybersecurity threat. I'm engaging in cybersecurity; the CEO is engaging in content filtering.

This is actually how it works in my organization, and also for Fairfax County Public Schools, and probably others in this space – from FCPS:

“Only Instructional Services, through their curriculum committees, can determine which general categories to block.”

The cybersecurity team doesn't make the content filtering policy. That policy is basically a (reasonable) legal requirement to filter some content, plus an amalgam of loosely joined, well-intended opinions (in other words, a committee) melted down, cast, and cooled into policy.

About that legal requirement. It mainly comes from the Children's Internet Protection Act, or CIPA, which requires some content filtering on school (K12 only) and library computers in order for those organizations to get preferred pricing on equipment and internet service. It's pretty specific, but the gist of it is: 1) kids cannot look at "obscene material" (porn) on their computers, and the act very narrowly defines that as something visual, e.g. an image; and 2) schools must have a policy in place to teach kids about being safe on the internet. That's it. It is a reasonable regulation.

There is another piece of legislation, the Children's Online Privacy Protection Act (COPPA), that basically governs what information can be collected from kids under 13. Enforcement of this rule is problematic, and the FTC is "giving it another look." In short, online services are limited in what they can collect from kids under 13, which is a headache for them because it makes targeted advertising hard. Most online services' terms of service expressly forbid children under 13 from signing up, but we all know how effective those terms of service are. There isn't much of a relationship between CIPA and COPPA for content filtering, but the "13 years old" part is worth noting for reasons I'll get into in a bit.

OK. So here we go. We have to filter on “adult content” – that seems OK – and we prevent threats to the org with a cybersecurity policy – that also seems OK. But if you’ve used your child’s device at all, you’ll know that way more content than this is filtered out. Why? And what’s the outcome of these policies compared to their intent?

When I was in school, a technologist could have written this post about how content filtering was simply ineffective. You'd try to block…some category of site, say adult content, and you'd end up blocking a bunch of safe sex education or whatever. I don't think that's true now: the actual URL filtering we use (Palo Alto's) is pretty effective and gets it right most of the time based on categories and dynamic updates. But this isn't 2004, and the nature and method by which students access the internet, and what opportunities exist for children on it, have radically changed.
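
To make the "categories and dynamic updates" part concrete, here is a minimal conceptual sketch of how category-based URL filtering decides what to do with a request. The domains, category names, and lookup table are my own illustration, not Palo Alto's actual database, API, or configuration syntax; a real firewall resolves categories through a vendor-maintained, dynamically updated service.

from urllib.parse import urlparse

# Hypothetical category database; a real firewall looks this up in a
# vendor-maintained, dynamically updated cloud service, not a local dict.
URL_CATEGORIES = {
    "spotify.com": "music",
    "examplephish.net": "phishing",
    "exampletorrents.org": "peer-to-peer",
    "simcompanies.com": "business-and-economy",
}

# The org's per-category actions: the first two are risk-based (cybersecurity);
# anything blocked beyond that is content filtering.
CATEGORY_ACTIONS = {
    "phishing": "block",
    "peer-to-peer": "block",
    "music": "allow",
    "business-and-economy": "allow",
}

def filter_url(url: str) -> str:
    """Return 'allow' or 'block' for a URL based on its category."""
    host = (urlparse(url).hostname or url).removeprefix("www.")
    category = URL_CATEGORIES.get(host, "unknown")
    return CATEGORY_ACTIONS.get(category, "allow")

print(filter_url("https://spotify.com"))       # allow
print(filter_url("https://examplephish.net"))  # block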

I didn't get a cell phone until 2006, and it had zero internet capability – it was "just a phone." Now everyone has devices, and if they don't have one, they will. I have opined at length in other online spaces about how all schools will be 1:1 device schools whether they want to go that route or not, not because of the pandemic (though that has certainly accelerated things), but because educational content providers (textbook publishers) have a lot to gain from simply providing a school district (or college, frankly) with a Chromebook for each student, included with a content subscription that can be updated at any time and integrated into an LMS. There is real value there compared to printing new editions of heavy textbooks year after year and reckoning with the used textbook market. But I digress. My point is that if your child doesn't have a school-issued device yet, they probably will soon. And from the practical perspective of access to the internet, content filtering is really the only difference between a school-issued device and a personal one.

I'm going to pick on my own school district for a minute, because I think this is a particularly heinous example: my school district blocks YouTube for K-8 students on their devices. The stated reason is something like "because YouTube's TOS doesn't allow children under 13 to use its service," which is nonsense, because if this standard were actually applied evenly (including to sites like Vimeo, which is not blocked but has the same verbiage in its TOS), they would be blocking like 80% of the internet. Anyway.

What is “the thing” a student loses by not being able to go to YouTube, or Vimeo, or any other video site that a school district decides they are going to filter out? Well, it’s a bit of a trick question, and the answer is that most of them don’t lose anything, because they have cell phones. They have had YouTube the entire time.

If they can afford it.

I took a look at VDOE's most recent spreadsheet of how many students in each school district are eligible for free and reduced lunch (FRL). In Northern Virginia, Alexandria City Public Schools is at 59%; Arlington Public Schools, Fairfax County Public Schools, and Stafford County Public Schools are at about 30%; Prince William County Schools is at 42%; Loudoun County Public Schools is at 18%; and Falls Church City Public Schools is at 7%. These numbers, combined with the fact that these districts' 1:1 device programs are essentially an equity mission – "to close the digital divide" – make it pretty safe to assume that a significant number of students don't have access to technology either at home or in their pockets. For many of them, their school device is the only device they have.

So a school district says – and let me be clear here that I am not only picking on my district, and that I do think these policies are made with good intentions – here is a device with an internet connection, and we are going to filter the content on it and celebrate our victory over "distractions" and "inappropriate content." But we know, because we put the program in place, that large equity gaps exist in our communities, and we can deduce from real data that these content policies mostly affect minorities and the underserved.

I find it hard to have an honest conversation about equity in schooling, where we agree that "equity" means "equity of opportunity," while we are, by overreaching on content filtering, depriving the very people we're trying to help of opportunities to grow and learn. To be sure, there is harmful content on the internet, but determining a standard of "harmful" beyond the obvious (already discussed) is…not really our job as people who work in K12.

(I could write an entire post about how denying access to one harmful resource simply motivates the person affected by that policy to find the content somewhere else, potentially somewhere more harmful, but I won't get into it here.)

Consider what capabilities a person with access to YouTube (for example) has compared to a person who doesn't, even in YouTube's beleaguered "Restricted Mode" for K12. If you want to know the history of the Cold War, you can learn it in minutes. If you want to learn about language, you can. If you want to see the news, that's there too, and in multiple languages. If you want to become a content creator, you can record a video and publish it, even monetize it. YouTube is not some golden arch through which you will find salvation, and it has a lot of well-researched, well-documented problems, but let's not make everyone who uses it a victim of "distraction" when the real victims are the people who, through no fault of their own, don't have phones with data plans or personal devices at home and can't get to the videos at all. Let's not pretend that monitoring YouTube content isn't a weight that literally every parent with an iPad has to bear once in a while in exchange for a few minutes of peace. And there are many K12 districts that block YouTube without considering that they are blocking a technically safe, scalable platform for learning and freedom of expression for lower-income families.
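
(As an aside, Restricted Mode is exactly the kind of middle ground districts already have available: Google documents network-level enforcement either by pointing YouTube hostnames at restrict.youtube.com or restrictmoderate.youtube.com in DNS, or by having a proxy inject a YouTube-Restrict header. Below is a minimal sketch, assuming the dnspython package is installed, of checking whether a network's resolver has the DNS-based enforcement in place. It's an illustration, not a deployment guide.)

import dns.resolver  # pip install dnspython

# Hostnames Google documents as DNS targets for Restricted Mode enforcement.
RESTRICT_TARGETS = {"restrict.youtube.com.", "restrictmoderate.youtube.com."}

def restricted_mode_enforced(hostname: str = "www.youtube.com") -> bool:
    """Return True if the local resolver maps YouTube to a Restricted Mode alias."""
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
        return False
    return any(str(rr.target).lower() in RESTRICT_TARGETS for rr in answers)

if __name__ == "__main__":
    print("Restricted Mode DNS enforcement detected:", restricted_mode_enforced())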

Consider also the filtering of online messaging, which many school districts do. It’s a thornier area than YouTube, but consider again who is harmed by blocking a website like Discord. If you rely on Discord to keep in contact with your friends, and you don’t have a phone, you’re worse off than your peers who do. Making students aware of the pitfalls of online communication and providing them the skills they need to become good digital citizens is a harder ask than “well, let’s remove access for everyone” – but it’s the ask we should be answering. Every company is a software company. Every org is a digital org. Remote schooling is here to stay. These skills are more important than ever.

I'll leave you with one last example, the one that inspired me to write all this up. Earlier this year a request came to me, via a parent, to block a business simulation game called SimCompanies because it was "a distraction." I took the request in good faith, but I played the game a little bit, and I stepped out of my lane by objecting to the decision to block it. It is a fascinating and pretty deep game from which one could learn a good deal about economics. I was advised that the block should remain, so I asked our firewall vendor to change the game's content category from "business and economics" to "games." They refused. I showed the game to some friends; one told me I could probably learn more about business and economics from playing this game than from anything I could learn in school, and I agreed. Most school districts would pay top dollar for an educational resource disguised as a game, yet here it was, the kids were using it, and it was free. But here we have the opinion of one parent affecting the filtering policy for 27,000 students.

I think we can do better.

To quote one of my favorite lectures, by Brian Moriarty, the designer of Loom:

“If super power is what people really want, why not just give it to them? Awesome things don’t hold anything back. Awesome things are rich and generous. The treasure is right there.”

School districts need to revisit their content filtering policies. They need to do it through an equity lens, with the understanding that good access to the internet is still, inexplicably, a privilege reserved for those who can most afford it: devices are ubiquitous in privileged families, while the devices school districts hand to everyone else are hamstrung by misguided policy. The notion that video websites, social media, online chat, and games are a constant source of distraction for students simply cannot be true when the achievement gap correlates with race and socioeconomic status, because the students with the least restricted access are not the ones falling behind. Districts need to do the hard thing by bringing in community leaders and experts to help train children on the benefits of good digital citizenship and data ownership. Give the kids the treasure. And let's empower parents by innovating new ways of making school-issued devices compatible with home internet filters.

Reasonable and prudent filtering is OK and good. As obscenity goes in this country, you know it when you see it. But turning people into leaders starts with giving them access to the opportunities and tools they need to grow into the goodness of leadership, not taking those things away because we said so.

*I bring up this example because it actually happened to me. Back in the day, Spotify used P2P technology in its application (I believe it no longer does), and the security team at the org thought it would be a good idea to block Spotify. It got to the CEO, the CEO got upset, and we were overruled.
