Windows now supports delivering updates via peer-to-peer (P2P). Let's see what that means. If you have a company with, say, 100 computers and low bandwidth, once a few of those computers manage to get their updates, they pass them on to the rest of the computers "near them". At least in theory. We will see later what's wrong with this scenario.
From a technical perspective, we finally see P2P technology at work at large scale. That's a good thing; I would really like to see more of this technology in use. Imagine the possibilities of a P2P social network and what it would be capable of if correctly implemented; or of a video streaming system, or even decentralized online gaming.
But let's get back to the issue at hand and to our example. Say 3 of these computers get their updates and then pass them on to nearby computers. So far so good: the rest won't need to consume internet bandwidth to do the same. But this assumes those computers are actually near each other, which is not always true. Windows detects "nearness" via IP geolocation, and that is a mistake. If you have multiple company sites worldwide (say a few sites in China, Europe and the US, to keep things simple), you may use connectivity technologies (MPLS, SD-WAN or others) that rely on tunnel links, and you may end up with all computers exiting to the internet via the same main node (your datacenter with the best WAN link). That means all computers in China, Europe and the US exit to the internet through the fastest node, let's say Europe. What Windows Update will then do is detect that all these computers reach the internet from the same area, assume they are neighbors and spread the updates between them. So the computers in China will exchange updates with the ones in Europe and with the ones in the US. That will saturate your bandwidth (WAN and tunnel) and cause general network slowness, plus, quite possibly, lots of denied connections due to security policies and a corresponding flood of logs.
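The failure mode above can be sketched in a few lines: if peer grouping effectively keys on the public egress IP (which is what IP-geolocation-based detection sees), every site behind the same internet breakout collapses into one peer group. The hostnames and addresses below are made up purely for illustration.

```python
# Sketch: peer grouping keyed on the public egress IP.
# All hosts NAT'ed through the same breakout look like "neighbors",
# regardless of where they physically sit. Names/IPs are hypothetical.
from collections import defaultdict

machines = [
    # (hostname, site, public egress IP seen by the update service)
    ("pc-shanghai-01", "China",  "203.0.113.10"),
    ("pc-shanghai-02", "China",  "203.0.113.10"),
    ("pc-berlin-01",   "Europe", "203.0.113.10"),
    ("pc-dallas-01",   "US",     "203.0.113.10"),
]

groups = defaultdict(list)
for host, site, egress_ip in machines:
    groups[egress_ip].append((host, site))

for ip, members in groups.items():
    sites = {site for _, site in members}
    print(ip, "->", len(members), "peers across sites:", sorted(sites))
# A single peer group spans China, Europe and the US, so peers on
# different continents will exchange update payloads over the WAN tunnels.
```

With every site tunneled through the Europe breakout, the grouping logic has no way to tell Shanghai from Berlin: all four machines end up in one group.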
So in a scenario like the above it's best to run on-site update servers (WSUS, for example) that handle updates, and to turn this feature off; otherwise you'll be slowed down a lot without knowing why.
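For reference, the peering behavior is controlled by Delivery Optimization's "Download Mode" policy, backed by the `DODownloadMode` registry value; setting it to 0 restricts downloads to plain HTTP with no peering. A minimal sketch for a single machine using `reg.exe` (run elevated; in a domain you would normally push this via Group Policy instead):

```shell
:: Disable Delivery Optimization peering on one machine (run as administrator).
:: DODownloadMode values: 0 = HTTP only (no peering), 1 = LAN peers only,
:: 2 = peers in the same group, 3 = internet peers, 99 = simple, 100 = bypass.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization" ^
    /v DODownloadMode /t REG_DWORD /d 0 /f

:: Verify the value took effect.
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization" /v DODownloadMode
```

Mode 1 (LAN peers only) is a middle ground if you still want peering strictly inside each site's local network.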
That was one example of a scenario where this feature makes things worse. But it's not the biggest problem. The most important issue is the security side of things. Imagine an exploit inside a network that compromises a single computer, with the attacker then using that computer to "update" the rest. You'd have the whole network under rogue control, with no efficient way to counter it. Of course, there are mechanisms in place to prevent a thing like this, but don't count on them being effective in practice.

Microsoft has had some epic update failures in the last 12 months. To name just two: the PrintNightmare patch, which made quite a few print servers fail to properly serve printing to workstations and also broke the normal adding of printers from a print server to workstations; and the very recent, even worse case of KB5009624 (for Windows Server 2012 R2, with counterparts for more recent OS versions), which practically broke Hyper-V, leaving none of the VM guests able to start. Imagine scheduling a few minutes of downtime to update a Hyper-V host (these updates usually run automatically at night, when most people don't work) and, once the update is done, finding yourself with no server able to start and no quick, obvious way to fix it. The point is: Microsoft patching remains of poor quality, as it always has been, and it's not to be trusted.
That said, even worse things can happen. Think about a high-end, top-level compromise; anything can be expected. Since updates can run as TrustedInstaller (TI), a compromise at that level would mean full assimilation of every Windows-based device on your network. And "bad guys" aside, this also gives Microsoft unprecedented control over any existing system running their OS. That is not desirable either, whatever you may think of it right now.