That was the case last year. I just attended the MS Ignite conference and sat in on a number of security sessions. One session in particular covered hack-proofing your client machines and showed a tool that foiled detection. The video is at
https://myignite.microsoft.com/#/videos/f0a03d6a-b89f-e411-b87f-00155d5066d7 at the 1-hour-12-minute mark. In the demo, which they performed live in front of me, they showed how a virus can be modified until it looks like something completely different. They uploaded a virus to VirusTotal (a website where you can upload a file, scan it against over 50 of the top antivirus engines, and get results instantly) and got a number of hits. Then they made a small modification to the .exe file, and suddenly it was no longer visible to any AV software, because its signature was completely different.

HB isn't a virus, but it could possibly be "detected" as badware if Blizzard is acting like an antivirus scanner looking for a known signature. If that is the case, a tool like this might be a valid mitigation.
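To illustrate the underlying point with a minimal sketch: a whole-file signature such as a cryptographic hash changes completely when even one byte of the file changes. This assumes the detection in question is a simple whole-file signature, which is only one possibility; real scanners often use fuzzier techniques. The file names sample.exe and sample_modified.exe are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

original = Path("sample.exe")           # hypothetical input file
modified = Path("sample_modified.exe")  # hypothetical output file

# Copy the file and append a single padding byte. Any byte-level change,
# no matter how small, yields a completely different hash, so a scanner
# matching on the original file's hash would no longer recognize it.
modified.write_bytes(original.read_bytes() + b"\x00")

print("original:", sha256_of(original))
print("modified:", sha256_of(modified))
```

Again, this only demonstrates why two byte-for-byte different executables have different whole-file signatures; it says nothing about what Blizzard actually scans for.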
While the tool in the video may be marked "internal use only", there may be a way for Bossland to provide compile-on-demand builds (or draw from a pool of unique executables compiled ahead of time) so that each HB executable is unique and therefore has no common signature to detect. I'm still on break from my 6-month ban, but I would like to look at doing something like this on my own copy of HB before I return. While there are plenty of naysayers who say it isn't necessary, stacking protection on top of protection when the actual detection method is unknown sounds like a solid mitigation strategy.
I know this is a little beyond what mscanice was referencing, but it seems like it would be effective against some detection techniques, especially process and executable scanners.
--Edited to clarify my point and hopefully prevent needless trolling.
* Source referenced - check
* Non-rehashed topic - check