HTTP vs IPFS: Is Peer-to-Peer Sharing the Future of the Web?


Thanks to Adrian Sandu, Panayiotis Velisarakos and Tim Severien for kindly helping to peer review this article.

The InterPlanetary File System (IPFS) is a revolutionary model that could change the way we use the Internet. Unlike the typical server–client model we’re accustomed to, IPFS works more like BitTorrent. Does that grab your attention? Then read on!

IPFS logo

The Problems With Today’s Web

The Hypertext Transfer Protocol (HTTP) is the backbone of the World Wide Web. We use HTTP to access most of the Internet; nearly every website we visit arrives over it. It’s essentially a server–client model, where our computer sends requests to the server hosting a website, and the server sends back responses.

HTTP, though, lends itself naturally to centralization: over time, an ever smaller set of large services ends up hosting an ever larger portion of the Web. It’s natural for those large services to emerge as the structure of much of the Web, but that sort of centralized environment can be dangerous. If any of the large hosting companies and service providers – such as Google, Microsoft, Amazon, Dropbox, Rackspace, and the like – were to suddenly falter, the short-term results for the Web would be disastrous. And herein lies the problem (or at least one of them).

In addition to this natural process of centralization, there’s also a troubling reliability issue with today’s web. Most websites and applications are hosted on a single server, or a redundant array of load-balanced servers, or whatever the case may be. If the owner of those servers, or the datacenter’s management, or even a natural disaster takes those machines out, will the application continue to run? Backups and redundancy can be put in place by organizations with enough resources, but even those can’t stop a company that simply decides to take down its website or application.

Reliance on Hosts

If and when the server hosting a site goes down, we’re reliant on the hosting company to have failsafes, redundant systems, backups, and so on. They must recognize that your service is out and help you restore it. If it’s a hardware issue, they should have alternative systems they can port your setup onto. They should have backup networking systems, and they should keep at least one backup of your data, whether they advertise it or not, in case of a data-loss incident that’s their fault.

What if they don’t?

Reliance on Site Administrators

Now the onus falls on site administrators to keep a service going and its data backed up. If you’ve ever been an avid user of an application that was suddenly shut down, you know this feeling.

The move toward open source helps tremendously, allowing multiple forks of a project to take off, and allowing things that are more static – like documentation – to be preserved in multiple locations and in multiple formats. But the fact remains that the majority of the Web is controlled by people like you or me, maintaining servers.

Some freelance developers even manage the hosting and maintenance of their smaller clients’ sites. What if they forget to pay the hosting bill? Get angry with a client and lock them out of their own site? Get hit by a truck? Yes, the site owner may have legal options in any of these cases, but will that help while the site is completely inaccessible?

Reliance on Users

Yet another problem lies with the users of any web application. Content often needs a critical mass of users or visitors to even merit hosting; low-traffic applications and static sites are frequently shuttered simply because they aren’t cost effective to run. The reverse problem is also very real: users of the modern Internet keep clustering together. Facebook – a single social network – reports somewhere in the ballpark of one out of every five people on Earth as active users, and countless businesses depend entirely upon it to exist. What if it shut down tomorrow?

Of course, Facebook won’t shut down tomorrow, and neither will most of the apps you love and use. But some may. And the more users who have flocked to a service before that happens, the more damage its disappearance will cause to everyday workflows – or even to personal and business finances – depending on what kind of applications you use and for what.

The Answer is IPFS

So, you may be asking, how does IPFS solve these problems? IPFS is a relatively new attempt to solve some of these issues using distributed file systems. The IPFS project is still fairly low on documentation, and is perhaps the first of many different solutions.

IPFS Nodes

First and foremost, you should understand a few things about IPFS. IPFS is decentralized: without a typical server providing web pages to every client that arrives at a website’s domain, a different infrastructure has to be imagined. Every machine running IPFS becomes a node in a swarm.

Consider the way torrents currently work. When you choose a file to download through a torrent application, you’re essentially sending out a request to all of the computers attached to the same torrent network as you. If any of them have the file you’re requesting and are able to upload at that moment, they begin sending pieces of it to your computer. That’s a condensed version, anyway.

So how do IPFS nodes work? Each machine running IPFS is able to select which files it wants its node to serve.
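To make that concrete, here’s a minimal sketch in Python (purely illustrative, not real IPFS code; the names Node, pin, has and fetch are invented for this example): a node keeps a store of content it has chosen to host, keyed by a hash of that content, and only answers peers’ queries for content it actually holds.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Identify content by a hash of its bytes (SHA-256 here; real IPFS uses multihashes)."""
    return hashlib.sha256(data).hexdigest()

class Node:
    """A toy IPFS-style node: it serves only the content it has chosen to host."""
    def __init__(self):
        self.store = {}  # content hash -> bytes

    def pin(self, data: bytes) -> str:
        """Decide to host a piece of content; the returned hash is how peers ask for it."""
        h = content_hash(data)
        self.store[h] = data
        return h

    def has(self, h: str) -> bool:
        """Answer a peer's 'do you have this hash?' query."""
        return h in self.store

    def fetch(self, h: str):
        """Serve the content if we hold it, else None."""
        return self.store.get(h)

# A swarm is just many such nodes; a client asks around until one of them has the hash
node_a, node_b = Node(), Node()
h = node_a.pin(b"hello, distributed web")
print(node_b.has(h))    # False -- node_b never chose to host this content
print(node_a.fetch(h))  # b'hello, distributed web'
```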

Hashing and IPNS

Every file that exists on IPFS has a unique hash to represent it, and any minute change results in a new hash being generated. These hashes are how content is addressed and retrieved: a client queries the system for a hash, and any node that has that content available can serve it to peers. The “swarm” provides a torrent-like experience, wherein peers serve each other content.

This system allows content to be served quickly and accurately to clients, regardless of their proximity to the original host of the content. Additionally, because hashes are employed, both ends of the exchange can verify they have the correct content, as a single bit out of place would result in a different hash.
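As a quick illustration of that integrity check, here’s a hedged Python sketch, with SHA-256 standing in for IPFS’s multihash identifiers: the requester already knows the hash it asked for, so it can verify whatever bytes a peer sends back, and even a single flipped bit produces a completely different hash.

```python
import hashlib

def content_hash(data: bytes) -> str:
    # SHA-256 stands in for IPFS's multihash-encoded identifiers
    return hashlib.sha256(data).hexdigest()

original = b"Hello, IPFS!"
tampered = bytes([original[0] ^ 0x01]) + original[1:]  # flip a single bit

requested = content_hash(original)  # the hash the client asked the swarm for

print(content_hash(original) == requested)  # True  -- bytes match what was requested
print(content_hash(tampered) == requested)  # False -- discard and ask another peer
```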

The InterPlanetary Name System (IPNS) can be used to assign a name to mutable (changeable) content: your node publishes a piece of content, attaches a name to it, and can then republish changes under the same name. Republishing over a name this way means older versions can drop out of reach, so IPNS records, according to the developers, may some day function more like a Git commit log, allowing a client to step back through previous versions of the published content.
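A rough mental model of that naming idea, sketched in Python (this is not the real IPNS record format; NameRecord, publish and resolve are invented for the illustration): a stable name resolves to whatever content hash was published last, and keeping the earlier hashes around is what would give it the Git-log-like history described above.

```python
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class NameRecord:
    """A stable name that can be re-pointed at new content hashes over time."""
    def __init__(self, name: str):
        self.name = name
        self.history = []  # every hash ever published under this name, oldest first

    def publish(self, data: bytes) -> str:
        h = content_hash(data)
        self.history.append(h)
        return h

    def resolve(self) -> str:
        """The name always resolves to the most recently published hash."""
        return self.history[-1]

site = NameRecord("my-blog")
site.publish(b"<html>version 1</html>")
site.publish(b"<html>version 2</html>")

print(site.resolve())     # hash of version 2 -- what visitors see today
print(site.history[:-1])  # earlier hashes -- the Git-commit-log-style trail
```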

Advantages of Decentralization

So, you’ve heard all about centralization and decentralization. But what are the practical benefits of the fact that IPFS is decentralized?

Reliability and Persistence
The content being served on the IPFS network can stick around, essentially, forever – as long as people want it to. There’s no single weak link, server, or point of failure. With larger files, there’s a benefit to having multiple peers for your IPFS node to choose from when acquiring the file, but the real benefit comes from having those multiple options in the first place: if one node hosting the content goes down, there will be others.

Secured Against DDoS-style Attacks
Simply by its nature, distributed peer-to-peer content is far more resistant to Distributed Denial of Service (DDoS) style attacks. These attacks work by bombarding host servers with traffic to bring down websites or services, but if the same content is being served to you by many peers, an effective DDoS attack would have to find and target all of them.

Previously Viewed Content Available Offline
With IPFS’s caching system in place, it’s entirely possible that quite a lot of your regularly viewed content would be available offline by default. Any dynamic content might not be up to date, of course, but previously viewed static resources could be at your fingertips whether you’re in range of Wi-Fi or not.
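Conceptually, that caching behaviour could look something like the following sketch (illustrative Python only, not the actual IPFS block store): content you’ve viewed before already sits in a local store keyed by its hash, so a request for the same immutable hash never needs the network at all.

```python
# Toy stand-ins: what the swarm could serve, and what this machine fetched earlier
swarm = {"hash-of-article": b"<html>an article you read last week</html>"}
local_cache = {"hash-of-article": swarm["hash-of-article"]}

def get(content_hash: str, online: bool) -> bytes:
    if content_hash in local_cache:
        # Content-addressed data is immutable, so a cached copy can never be stale
        return local_cache[content_hash]
    if not online:
        raise LookupError("not cached locally and no peers reachable")
    data = swarm[content_hash]        # in reality: ask peers in the swarm for it
    local_cache[content_hash] = data
    return data

print(get("hash-of-article", online=False))  # served straight from the local cache
```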

How Would Things Change?

With IPFS as a major player, things would definitely change. Although IPFS and IPNS addresses can currently be mapped to HTTP addresses via gateways, that needn’t be the case forever. Web browsers might change, or be replaced entirely. More likely, given the transition, you’d simply begin using multiple protocols to access content: instead of typing http:// you might end up with several other protocols available in major browsers. Those browsers would also need an automatic way to replace locally cached content whenever the node the browser contacts is presenting the same content under a new hash.
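One way to picture that browser behaviour is the hedged sketch below (plain Python with invented names like resolve_address; the Qm-v1/Qm-v2 hashes are placeholders, and real browsers and IPFS differ in the details): immutable hash-addressed content can be served straight from cache, while mutable name-addressed content has to be re-resolved so the browser notices when the published hash has changed.

```python
# Toy lookup tables standing in for the swarm and the naming system
blocks = {"Qm-v1": b"<html>version 1</html>", "Qm-v2": b"<html>version 2</html>"}
ipns_names = {"my-site": "Qm-v2"}    # the name currently points at the newest hash
cache = {"Qm-v1": blocks["Qm-v1"]}   # what the browser fetched on an earlier visit

def resolve_address(address: str) -> bytes:
    scheme, _, ref = address.partition("://")
    if scheme == "ipfs":
        # Hash-addressed content is immutable, so a cached copy is always still valid
        if ref not in cache:
            cache[ref] = blocks[ref]  # in reality: fetch the block from peers
        return cache[ref]
    if scheme == "ipns":
        # Name-addressed content is mutable, so re-resolve to pick up any new hash
        return resolve_address("ipfs://" + ipns_names[ref])
    raise ValueError("unsupported scheme: " + scheme)

print(resolve_address("ipns://my-site"))  # returns version 2, not the stale v1 copy
```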

Browsers, or other clients, might be the only software necessary. Remember that IPFS is peer-to-peer, so your IPFS installation is simply reaching out to locate other nodes.

You also may wonder what happens with websites serving dynamic content. The answer here is far less clear. While updating static content and republishing to IPFS might not be such a challenge, dynamic, database-driven websites will be significantly more complicated. The challenge ahead will be for developers and proponents of the system to create not only viable, but also practical alternatives to cover these use cases, as a huge portion of the Web today is driven by dynamic database content. IPNS provides some potential solutions here, as do other services that are being developed, but a production-ready solution is yet to come.

The Future with IPFS

IPFS is definitely not a polished, well-oiled machine yet. It’s more of a fascinating prototype of what the Web could look like in coming years. The more people who test, contribute, and work to improve it, the greater chance it will have to change the way we serve content on the Internet as a whole. So get involved!

Download one of the prebuilt binaries or source files here, or check out the documentation, to get a little more information on the subject, and get started today!

Frequently Asked Questions (FAQs) about HTTP vs IPFS

What is the main difference between HTTP and IPFS?

HTTP (HyperText Transfer Protocol) and IPFS (InterPlanetary File System) are both protocols used for transferring data over the internet. However, they differ significantly in their approach. HTTP is a centralized system where data is stored on servers and accessed via URLs. This means if a server goes down, the data becomes inaccessible. On the other hand, IPFS is a decentralized system where data is stored across multiple nodes in a network. This ensures data remains accessible even if some nodes go offline. IPFS also provides more efficient data transfer as it retrieves data from the nearest node, reducing latency and bandwidth usage.

Is IPFS more secure than HTTP?

IPFS has several features that make it potentially more secure than HTTP. Firstly, IPFS uses content addressing rather than location addressing. This means data integrity can be verified as each file and all the blocks within it are given a unique fingerprint called a cryptographic hash. Secondly, IPFS supports peer-to-peer data distribution, which can be more resilient to attacks as there is no single point of failure. However, like any system, IPFS’s security also depends on how it’s used and implemented.

How does IPFS impact web browsing?

IPFS can significantly improve web browsing experience. By retrieving data from the nearest node, IPFS can speed up page load times, especially for users in areas with slower internet connections. Additionally, IPFS’s decentralized nature means that websites and web content can remain accessible even if the original publisher’s server goes offline. This could potentially revolutionize how we access and share information on the web.

Can IPFS replace HTTP?

While IPFS has many advantages over HTTP, it’s not likely to completely replace HTTP in the near future. Many existing web applications and services are built around HTTP, and transitioning to IPFS would require significant changes. However, IPFS could be used alongside HTTP to improve data integrity, efficiency, and resilience. Some web browsers, like Brave, have already started integrating IPFS support.

What are the challenges in adopting IPFS?

While IPFS offers many benefits, it also comes with its own set of challenges. These include issues related to data privacy, as all files added to IPFS are publicly available by default. There are also concerns about data persistence, as data can be lost if no node in the network is storing it. Additionally, the adoption of IPFS requires changes to existing web infrastructure and applications, which can be a significant hurdle.

How does IPFS handle data redundancy?

IPFS handles data redundancy through its decentralized nature. When a file is added to the IPFS network, it’s broken down into smaller blocks, each with a unique hash. These blocks are then distributed across multiple nodes in the network. This means that even if some nodes go offline, the data can still be retrieved from other nodes that have the same blocks.
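A hedged sketch of that idea in Python (fixed-size chunks and SHA-256 are used for simplicity; real IPFS has its own chunking strategies and multihash identifiers): a file becomes an ordered list of independently hashed blocks, and any node holding a given block can serve it.

```python
import hashlib, os

BLOCK_SIZE = 256 * 1024  # 256 KiB, roughly the default chunk size IPFS uses

def split_into_blocks(data: bytes):
    """Break a file into blocks; return the ordered hash list plus a hash -> block store."""
    order, store = [], {}
    for i in range(0, len(data), BLOCK_SIZE):
        chunk = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        order.append(h)   # the file is just this ordered list of block hashes
        store[h] = chunk  # any node holding a block can serve it independently
    return order, store

file_data = os.urandom(3 * BLOCK_SIZE + 100)  # a file a little over three blocks long
order, store = split_into_blocks(file_data)

print(len(order))                                      # 4 blocks
print(b"".join(store[h] for h in order) == file_data)  # True: reassembly is exact
```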

How does IPFS contribute to the future of the web?

IPFS could play a significant role in the future of the web by making it more decentralized, efficient, and resilient. It could enable new types of web applications and services that are not possible with the current HTTP-based web. For example, IPFS could be used to build a distributed web archive, a peer-to-peer streaming service, or a decentralized social network.

Is IPFS faster than HTTP?

IPFS can potentially be faster than HTTP, especially for users in areas with slower internet connections. This is because IPFS retrieves data from the nearest node, reducing latency and bandwidth usage. However, the actual speed can vary depending on various factors, such as the number of nodes in the network and their geographical distribution.

How does IPFS handle large files?

IPFS handles large files by breaking them down into smaller blocks. Each block is given a unique hash, which is used to retrieve the block from the network. This allows IPFS to handle large files more efficiently than HTTP, as it can retrieve different blocks from different nodes simultaneously.

Can I use IPFS with my existing web applications?

Yes, IPFS can be used with existing web applications. However, it may require some changes to how your application handles data storage and retrieval. There are also libraries and tools available that can help integrate IPFS with existing web technologies.

Jeff Smith

Jeff works for a startup as a technical writer, does contract writing and web development, and loves tinkering with new projects and ideas. In addition to being glued to a computer for a good part of his day, Jeff is also a husband, father, tech nerd, book nerd, and gamer.
