The internet is home to all facets of human experience – both good and bad. As connectivity has increased, so too has the proliferation of illegal and dangerous content online. A 2019 survey by the Internet Watch Foundation found that child sexual abuse material on major platforms increased by 98% between 2014 and 2018. Terrorist propaganda and recruitment floods social networks, with over 90,000 pieces of ISIS content detected in a single six-month period according to the Middle East Research Institute. The World Health Organization estimates that over 50% of pharmaceuticals purchased online are counterfeit.
These grim statistics underscore the importance of detecting and removing illegal content quickly in order to protect society. But doing so presents immense technological challenges. A staggering 4.5 billion internet users generate petabytes of data every day. Reviewing all this content manually is impossible.
In recent years, artificial intelligence (AI) has shown promising results in automating the detection of online illicit material. Machine learning models can process vast volumes of data in seconds with high accuracy. This comprehensive guide will explore the current state of using AI to detect illegal content online.
Categories of Illegal Online Content
Let's first outline the major types of illegal and prohibited internet content and why limiting their spread is crucial:
Child Sexual Abuse Material
Unfortunately, pedophiles have taken advantage of the internet to produce and distribute photographs and videos depicting the sexual abuse of children. In 2021, the Internet Watch Foundation reported finding over 20 million images and videos containing child sexual abuse content, concentrated on the dark web. But it also spreads across mainstream platforms like Facebook, Snapchat and Twitter, which work aggressively to detect and remove it.
This content graphically documents victims being sexually assaulted and its proliferation directly enables further child exploitation. Plus, viewing and distribution carries heavy criminal penalties in most nations.
Terrorist Propaganda and Recruitment
Terror groups like ISIS leverage social networks and encrypted messaging apps to spread extremist propaganda aimed at recruiting vulnerable individuals. A study by George Washington University's Program on Extremism found that 56% of American adults arrested for supporting ISIS between March 2014 and 2018 had consumed the group's social media propaganda.
Allowing terror content to spread online directly radicalizes individuals to commit acts of violence. Tech companies walk a fine line between enforcing bans and respecting free expression.
Pirated and Counterfeit Goods
From Hollywood movies to pharma drugs, cheap counterfeit copies abound online. Research group NetNames estimated global lost revenue from pirated digital content at $63.4 billion in 2019. Counterfeit pharmaceutical sales are expected to top $75 billion globally by 2024 according to Pharmaceutical Commerce.
While pirated movies and music may seem harmless, they cause massive losses for creative industries. More dangerously, fake drugs sold online leave customers vulnerable to unsafe products.
Hate Speech and Disinformation
Social networks are wrestling with the spread of disinformation and dangerous speech that can incite real-world violence. In 2018, United Nations human rights experts called on Facebook to act after its platform was used to fuel violence against the Rohingya people in Myanmar.
Tech companies must balance enforcing content moderation policies with respecting free expression. Doing so requires understanding nuanced local social contexts.
Detecting and enforcing against these different categories of illegal content presents immense technological and ethical challenges. Let's explore some of the AI techniques being applied today.
AI Techniques for Detecting Illegal Content
Recent breakthroughs in machine learning and deep learning have enabled major progress in automatically detecting different types of illegal and prohibited online content:
Image Analysis for Identifying CSAM and Violent Extremism
Computer vision techniques are being applied to analyze and categorize disturbing visual content like child abuse material and violent extremist propaganda. Some approaches include:
- Perceptual image hashing – Images are encoded into unique digital fingerprints using algorithms like Microsoft's PhotoDNA. Instead of analyzing image contents directly, matches are found by comparing image hashes against hashes of known illegal material. This allows identifying duplicates of the same image without needing to store the offending media (a minimal open-source sketch follows this list).
- Object detection – State-of-the-art deep learning models like Faster R-CNN can accurately identify people, objects, text, symbols and other features within images. This provides vital context clues to judge whether the image contains illegal content – for example, detecting nudity, children's faces, or extremist slogans on flags.
- Skin detection – Machine learning models can be trained to identify human skin tones and textures. This helps pinpoint inappropriate sexual content.
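To make the hashing idea concrete, here is a minimal Python sketch using the open-source imagehash library rather than PhotoDNA itself, which is proprietary and licensed only to vetted organizations. The known_hashes set and the example hash value are placeholders:

from PIL import Image
import imagehash

# Placeholder set of perceptual hashes of known illegal images.
# Real deployments use licensed, vetted hash lists (e.g., PhotoDNA).
known_hashes = {imagehash.hex_to_hash("d1d1b1a1c1e1f101")}

def matches_known_hash(image_path, max_distance=5):
    # Compute a 64-bit perceptual hash of the candidate image
    candidate = imagehash.phash(Image.open(image_path))
    # Small Hamming distances indicate near-duplicate images
    return any((candidate - known) <= max_distance for known in known_hashes)

Because hashes rather than images are compared, platforms can check uploads against known material without ever redistributing that material.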
Here is sample Python code for an image classifier to detect child exploitation content:
import torch
import torchvision
from torch.utils.data import DataLoader

# Load pretrained ResNet model
model = torchvision.models.resnet18(pretrained=True)

# Replace classifier with a custom two-class output layer (illegal vs. legal)
model.fc = torch.nn.Linear(512, 2)

# Set up optimizer and loss function
optimizer = torch.optim.Adam(model.parameters())
loss_fn = torch.nn.CrossEntropyLoss()

# Train model on example dataset (assumes `dataset` yields image tensors and labels)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
model.train()
for images, labels in loader:
    # Forward pass
    predictions = model(images)
    # Calculate loss
    loss = loss_fn(predictions, labels)
    # Backpropagate and update weights
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Test model on new images (assumes `preprocess` applies the same transforms used in training)
model.eval()
with torch.no_grad():
    inputs = preprocess(test_images)
    predictions = model(inputs)
    _, predicted = torch.max(predictions, 1)
This gives a brief example of developing a deep learning classifier for flagging illegal content in images. The same principles are applied by companies like Google, Meta, and Microsoft to scan uploaded images in real time.
Natural Language Processing for Text Analysis
To detect illegal activities discussed in posts, blogs, and messaging apps, NLP techniques are employed, including:
- Sentiment analysis – Classifying the tone and emotional sentiment of written text using natural language models. This can identify hostile, threatening, or radicalized language.
- Keyword detection – Scanning for terms, slogans, and names associated with banned groups, drugs, or criminal activity. This can be combined with sentiment analysis for greater accuracy (a minimal sketch follows this list).
- Language models – Large pretrained NLP models like OpenAI's GPT-3 and Google's BERT are fine-tuned on labeled data to accurately classify text snippets as illegal or innocuous.
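As a simple illustration of keyword detection, here is a minimal Python sketch that flags text containing terms from a watchlist. The banned_terms list is a hypothetical placeholder; in practice such lists are curated by trust and safety teams and combined with classifier scores:

import re

# Hypothetical watchlist of banned terms
banned_terms = ["examplecartel", "examplebannedgroup"]

# Match whole words, case-insensitively
pattern = re.compile(r"\b(" + "|".join(map(re.escape, banned_terms)) + r")\b", re.IGNORECASE)

def flag_keywords(text):
    # Return any watchlist terms found in the text
    return pattern.findall(text)

print(flag_keywords("Join ExampleBannedGroup today"))   # ['ExampleBannedGroup']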
Here is sample Python code employing the HuggingFace BERT NLP model for hate speech detection:
import numpy as np
from transformers import BertTokenizer, BertForSequenceClassification

# Load the tokenizer and a locally fine-tuned hate speech classifier
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained("./bert_model")

text = "I hate green people! They should be banned..."

# Tokenize the input and run it through the model
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)

# Pick the class with the highest logit
scores = output.logits[0].detach().numpy()
predicted_class_id = np.argmax(scores)
print(predicted_class_id)
# Outputs 1 (hate speech class)
Fine-tuned on labeled datasets, powerful NLP models can classify text as illegal or permitted with high accuracy.
Network Analysis for Uncovering Organized Crime
Analyzing connections and patterns in online user networks helps uncover organized criminal activities like trafficking rings and extremist groups:
- Graph analysis – Mapping relationships between users and groups to identify tightly knit clusters. This reveals coordinated networks engaged in illegal activities (see the sketch after this list).
- Anomaly detection – Identifying abnormal behavior that deviates from normal activity patterns can surface criminal operations.
- Recommendation analysis – Finding interconnected clusters that engage with prohibited content helps take down networks sharing child abuse media or pirated intellectual property.
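Here is a minimal sketch of the graph analysis idea using the networkx library. The edge list and flagged_users set are hypothetical placeholders; a real system would build the graph from interaction logs at far larger scale:

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical interaction graph: an edge means two accounts exchanged messages or shared media
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("x", "y")])

# Accounts already flagged for sharing prohibited content (placeholder)
flagged_users = {"a", "b"}

# Partition the graph into tightly knit communities
for community in greedy_modularity_communities(G):
    overlap = len(community & flagged_users) / len(community)
    # Surface clusters where a large share of members are already flagged
    if overlap >= 0.5:
        print("Suspicious cluster:", sorted(community))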
Facebook's Sapienz system uses graph analysis across its social networks to uncover child exploitation rings and terrorist cells plotting attacks. In 2017, Sapienz helped dismantle a 30,000-member child exploitation group on WhatsApp through network insights.
Deep Dive – Developing an Image Classification Model
To provide more detail, let's walk through an end-to-end example of developing a machine learning model for detecting child sexual abuse material in images.
Data Collection and Annotation
- Obtain a dataset of sample illegal images through vetted partnerships with child-safety nonprofits; organizations like Thorn work with industry to enhance detection capabilities.
- Supplement the illegal data with a variety of legal images like animals, landscapes, everyday objects.
- Upload the images to a labeling platform and distribute them evenly among trained human annotators.
- Annotators classify each image as either "Child Sexual Abuse Material" or "Legal".
- Analyze inter-annotator agreement between labelers to measure consistency (a small agreement sketch follows this list).
- Preprocess images – convert to tensors, normalize, resize, etc.
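For the agreement check, a common approach is Cohen's kappa computed over the labels from a pair of annotators. Here is a minimal scikit-learn sketch using made-up label arrays:

from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators for the same 8 images (1 = CSAM, 0 = legal)
annotator_a = [1, 0, 0, 1, 1, 0, 0, 1]
annotator_b = [1, 0, 0, 1, 0, 0, 0, 1]

# Kappa near 1.0 means strong agreement; low values suggest unclear guidelines or labeling noise
print(cohen_kappa_score(annotator_a, annotator_b))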
Model Development and Training
- Start with a pretrained ResNet CNN architecture tuned for object recognition. This provides a strong baseline.
- Replace the default classifier with an output layer of 2 nodes for binary classification – CSAM vs Legal.
- Build the model in PyTorch and train it on the annotated dataset, monitoring loss, accuracy, ROC AUC, and other metrics.
- Tune hyperparameters like learning rate, regularization, number of epochs. Use early stopping to prevent overfitting.
- Employ aggressive data augmentation – random crops, flips, compression artifacts, noise – to improve robustness (see the augmentation sketch after this list).
- Iterate training until optimal validation accuracy is reached.
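As an illustration of the augmentation step, here is a minimal torchvision transform pipeline; the specific transforms and parameters are illustrative choices, not the tuned values a production team would use:

import torch
from torchvision import transforms

# Training-time augmentation: random geometric and photometric perturbations
# plus additive noise to simulate re-encoding and tampering
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: torch.clamp(x + 0.02 * torch.randn_like(x), 0.0, 1.0)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])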
Testing and Production Deployment
- Test model on holdout datasets and measure precision, recall and F1-score.
- Analyze false positives and negatives using a confusion matrix. Adjust thresholds and retrain as needed (an evaluation sketch follows this list).
- Run model against new unlabeled images and ensure accurate binary predictions.
- Start with narrow pilot deployment to catch issues before full production rollout.
- Implement human-in-the-loop review for lower confidence predictions to minimize false positives.
- Continue monitoring and tuning the model in production with new labeled data.
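Here is a minimal sketch of the evaluation step using scikit-learn; y_true and y_pred are placeholder arrays standing in for the holdout labels and model predictions:

from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix

# Placeholder holdout labels and model predictions (1 = CSAM, 0 = legal)
y_true = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))
# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_true, y_pred))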
Walking through a sample workflow provides deeper insight into the complexity of developing real-world AI systems for illegal content detection. The same overall steps are followed by technology companies training computer vision models at massive scales using GPU clusters.
Addressing Challenges in AI Content Detection
While AI promises improved accuracy and efficiency in tackling online illicit content, there are still key challenges:
- False positives – Incorrectly flagging benign content results in censorship of legal speech. Mitigations include employing ensemble models and always allowing human review before final judgments.
- Adversarial content – Images and text can be carefully manipulated to evade AI detection. Continuously retraining models on new adversarial examples improves robustness against these evasion tactics.
- Novel content – As illegal content evolves with new code words, slogans, and logos, models need regular retraining to adapt. Maintaining up-to-date training datasets is critical.
- Limited context – Unlike humans, algorithms cannot consider cultural contexts and nuances in language that differentiate illegal from permitted speech. Ongoing model tuning on representative data enhances understanding.
- Encryption – Widespread use of encryption limits visibility into content shared through encrypted apps and networks. However, network analysis can still uncover behavioral signals indicative of illegal activity.
To address these limitations, experts recommend an ensemble approach combining multiple AI models with ultimate human review before executing actions like account suspensions, as sketched below. Ongoing monitoring and adaptation are crucial as illegal content continuously evolves to avoid detection.
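To illustrate the ensemble-plus-human-review pattern, here is a minimal Python sketch; the component scores and thresholds are arbitrary placeholders, and real systems weight individual models and calibrate thresholds per policy area:

def route_content(model_scores, block_threshold=0.9, review_threshold=0.6):
    # Average scores from several independent detectors (hash match, CNN, NLP, ...)
    combined = sum(model_scores) / len(model_scores)
    if combined >= block_threshold:
        return "auto-remove"          # high-confidence violations are removed immediately
    if combined >= review_threshold:
        return "human review"         # borderline cases go to trained moderators
    return "allow"                    # low scores are left up, but may be sampled for auditing

print(route_content([0.95, 0.88, 0.91]))   # auto-remove
print(route_content([0.70, 0.55, 0.62]))   # human review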
Implementing AI Responsibly Against Online Threats
AI-powered capabilities offer immense potential to counter illegal activities at web scale. However, these technologies raise legitimate concerns around privacy, bias, and overreach if applied irresponsibly.
Here are some best practices for implementing AI safely and ethically:
- Start with high consensus illegal use cases like child exploitation to establish viability before expanding to murkier areas.
- Understand and comply with all relevant laws and regulations in regions of operation. Engage policy makers.
- Be transparent about use of AI, allowing external researchers to audit systems. Publish accuracy metrics.
- Enable human review prior to suppressing content or suspending accounts when possible.
- Continuously monitor model decisions for unintended bias. Collect representative training data.
- Implement stringent privacy safeguards around any user data leveraged for model development.
Adhering to these principles and allowing public scrutiny enables developing AI that enhances safety while protecting civil liberties. External oversight and collaboration among companies, academics, governments, and nonprofits are critical.
The Outlook for AI Against Online Crime
In recent years, AI has rapidly transitioned from academic promise to real-world deployment in combating online harms. Current techniques have major limitations, but are improving constantly thanks to exponential gains in computing power and availability of training data. As algorithms become more advanced in the coming years, AI is poised to revolutionize the enforcement of policies around illegal content at tremendous speed and scale.
Through responsible development and application, AI can significantly limit the spread of dangerous online material that threatens both public safety and vulnerable groups like children. However, maximizing these benefits requires sustained engagement between companies, regulators, and external researchers to ensure transparency and accountability. With collaborative and ethically grounded efforts, these emerging technologies can profoundly enhance our capability to detect illegal activities without compromising human rights and liberties.