Peer-to-peer file-sharing applications suffer from a fundamental problem of unfairness. Free-riders cause slower download times for others by contributing little or no upload bandwidth while consuming much download bandwidth. Previous attempts to address this fair bandwidth allocation problem suffer from slow peer discovery, inaccurate predictions of neighboring peers' bandwidth allocations, underutilization of bandwidth, and complex parameter tuning. We present FairTorrent, a new deficit-based distributed algorithm that accurately rewards peers in accordance with their contribution. A FairTorrent peer simply uploads the next data block to the peer to whom it owes the most data, as measured by a deficit counter. FairTorrent is resilient to exploitation by free-riders and strategic peers, is simple to implement, and requires no bandwidth over-allocation, no prediction of peers' rates, no centralized control, and no parameter tuning. We implemented FairTorrent in a BitTorrent client without modifications to the BitTorrent protocol and evaluated its performance against other widely used BitTorrent clients. Our results show that FairTorrent provides up to two orders of magnitude better fairness, up to five times better download times for contributing peers, and 60%–100% better performance on average in live BitTorrent swarms.
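The deficit-based scheduling rule above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, method names, and the convention that a positive deficit means "bytes owed to that peer" are assumptions for exposition.

```python
from collections import defaultdict

class DeficitScheduler:
    """Sketch of deficit-based block scheduling: each counter tracks
    bytes received from a peer minus bytes sent to it, i.e. how much
    data we currently owe that peer. The next block always goes to
    the interested peer with the largest deficit."""

    def __init__(self):
        self.deficit = defaultdict(int)  # peer_id -> bytes owed to that peer

    def on_receive(self, peer, nbytes):
        self.deficit[peer] += nbytes     # we now owe this peer more

    def on_send(self, peer, nbytes):
        self.deficit[peer] -= nbytes     # uploading repays the debt

    def pick_next_peer(self, interested_peers):
        # Upload the next block to the peer we owe the most data.
        return max(interested_peers, key=lambda p: self.deficit[p])
```

Note that a free-rider that uploads nothing accumulates no deficit in anyone's counter, so under this rule it is served only after all contributing peers have been repaid.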
Previous studies that captured P2P overlay topologies with a crawler either rely on slow crawlers, which inevitably lead to significantly distorted snapshots of the overlay, or capture only a portion of the overlay, which is likely to be biased (and non-representative). These studies do not examine the accuracy of their captured snapshots and conduct only limited analysis of the overlay topology. More importantly, these few studies are outdated (more than three years old), since P2P file-sharing applications have significantly increased in size and incorporated several new topological features over the past few years.
Accurately capturing the overlay topology of a large scale P2P network is challenging. A common approach is to use a topology crawler that progressively queries peers to determine their neighbors. The captured topology is a snapshot of the system as a graph, with the peers represented as vertices and the connections as edges. However, capturing accurate snapshots is inherently difficult for two reasons:
(i) Overlay topologies change as the crawler operates and
(ii) a non-negligible fraction of peers in each snapshot are not directly reachable by the crawler. When a crawler is slow relative to the rate of overlay change, the resulting snapshot will be significantly distorted. Furthermore, verifying the accuracy of a crawler's snapshots is difficult due to the absence of authoritative reference snapshots. We introduce techniques for studying the accuracy of a crawler, and this work focuses on developing an accurate understanding of the overlay topology.
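The crawling procedure described above can be sketched as a breadth-first traversal of the overlay. This is a hypothetical simplification: the `query_neighbors` callback stands in for the protocol-specific neighbor query, and returning `None` models a peer that is not directly reachable (e.g., behind a NAT or firewall).

```python
from collections import deque

def crawl(seed_peers, query_neighbors):
    """Progressively query peers for their neighbors, building the
    snapshot as a graph: peers are vertices, connections are edges.
    Returns (vertices, edges, unreachable)."""
    visited, edges, unreachable = set(), set(), set()
    frontier = deque(seed_peers)
    while frontier:
        peer = frontier.popleft()
        if peer in visited:
            continue
        visited.add(peer)
        neighbors = query_neighbors(peer)
        if neighbors is None:            # peer not directly reachable
            unreachable.add(peer)
            continue
        for n in neighbors:
            edges.add(frozenset((peer, n)))  # undirected edge
            if n not in visited:
                frontier.append(n)
    # Unreachable peers still appear as vertices via their neighbors' reports.
    vertices = visited | {p for e in edges for p in e}
    return vertices, edges, unreachable
```

Because the overlay changes while the loop runs, the result is not an instantaneous snapshot: edges discovered early may no longer exist by the time the crawl ends, which is why crawl speed relative to the rate of overlay change governs snapshot accuracy.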