Bandwidth usage of Wikimedia Commons increased by 50% due to AI bots
The *Wikimedia Foundation*, the nonprofit that runs Wikipedia and many other open-knowledge projects, has reported that **50% more media files** are being downloaded from its *Wikimedia Commons* website. This increase is not coming from ordinary readers; it is being driven by **bots scraping content to train AI models**.
Wikimedia wrote on its blog that its infrastructure is built to handle sudden surges of human traffic, but that **AI scrapers** are putting an unprecedented amount of pressure on the system.
According to the report, **65% of the most resource-intensive traffic is generated by bots**, even though bots account for only **35% of total pageviews**. The problem is that these bots often request obscure files that are not held in the regional caches and must instead be fetched from the **core data center**, which drives up both costs and operational strain.
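To see why that matters, here is a minimal sketch of the general idea, not Wikimedia's actual infrastructure: the cache names and cost figures below are made-up illustrative values. Popular content served from a regional cache is cheap; a cache miss falls through to a more expensive core fetch, and bulk crawls of obscure pages miss almost every time.

```python
# Illustrative sketch only: a toy two-tier cache, not Wikimedia's real system.
# The "costs" are made-up units to show why cache misses are expensive.

CACHE_COST = 1   # hypothetical cost of serving from a regional cache
CORE_COST = 20   # hypothetical cost of fetching from the core data center

regional_cache = {"popular_page"}  # frequently requested content stays cached

def serve(page: str) -> int:
    """Return the (made-up) cost of serving one request."""
    if page in regional_cache:
        return CACHE_COST        # cache hit: cheap
    regional_cache.add(page)     # cache the page after fetching it once
    return CORE_COST             # cache miss: expensive core fetch

# Human traffic concentrates on popular pages; bot crawls sweep through
# obscure ones, so almost every bot request is a cache miss.
human_requests = ["popular_page"] * 100
bot_requests = [f"obscure_page_{i}" for i in range(100)]

print("human cost:", sum(serve(p) for p in human_requests))  # mostly hits
print("bot cost:  ", sum(serve(p) for p in bot_requests))    # mostly misses
```

In this toy model the bot traffic costs twenty times as much to serve as the same number of human requests, which mirrors the mismatch the report describes between bots' share of pageviews and their share of expensive traffic.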
Wikimedia's **site reliability team** now has to work continuously to block these bots so that ordinary readers are not affected. And this is not just Wikimedia's problem: **the entire Internet is struggling with it**.
AI companies are ignoring crawler directives like **robots.txt**, which is a voluntary convention rather than a security mechanism, and this is putting heavy pressure on Internet infrastructure. Tech companies like **Cloudflare** are building new defenses against scrapers, but it has become a **cat-and-mouse game**.
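For contrast, this is what a well-behaved crawler does: it consults a site's robots.txt before fetching anything. A minimal sketch using Python's standard-library `urllib.robotparser`; the `ExampleBot` user agent is a hypothetical name, not a real crawler:

```python
# Sketch of a polite crawler: check robots.txt before fetching a URL.
# "ExampleBot" is a hypothetical user agent used only for illustration.
from urllib import robotparser

rp = robotparser.RobotFileParser("https://commons.wikimedia.org/robots.txt")
rp.read()  # download and parse the site's robots.txt rules

url = "https://commons.wikimedia.org/wiki/Special:Random"
if rp.can_fetch("ExampleBot", url):
    print("allowed to fetch", url)
else:
    print("robots.txt disallows", url)  # a polite bot stops here
```

Because robots.txt is honored purely at the crawler's discretion, a scraper that skips this check faces no technical barrier, which is exactly why publishers are turning to active countermeasures.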
If this trend continues and worsens, **digital publishers** will be forced to hide their content behind **logins and paywalls**, making it much harder for ordinary people to access free information on the Internet.