VB.NET and its broken data types

VB.NET is starting to get on my nerves.

I have been coding in BASIC since I was 13 years old. The language and I just clicked; I felt as if I were simply asking the computer, in plain English, what to do. It just felt right.

At about that same time I also learned C, Java and (68000/80×86) assembler, but it just wasn’t the same.
Sure, all of them had the “execution speed” advantage over BASIC, but early on I realized that the time spent in the “development cycle” was far more important than any other advantage those languages could provide.

Of course, this is a highly disputable argument but at the time, it made quite a lot of sense.

So I’ve been coding in BASIC (regardless of the compiler/IDE) for most of my life, and I have also been heavily criticized for it, my critics always resorting to the common misconceptions: BASIC is slow, BASIC is RAM hungry, the code looks like spaghetti, the WENDs, the POKEs and PEEKs, etc., etc…

I love challenges and coding in BASIC can be a huge one. But, so far, I think I’ve been able to do anything I’ve wanted in pure BASIC… until now.

I have coded several emulators and compilers in BASIC and I have never encountered any problems directly caused by the language itself… until now.

For the past few years (in my spare time) I’ve been working on what should have been a quite simple program: an 80×86 emulator.
Unfortunately, this has turned out to be a complete nightmare!

Quite early in the development of the emulator’s most basic features, I realized I was going to have a problem properly handling 8- and 16-bit operations. And I was quite right.

Although the emulation code hasn’t changed that much since its initial implementation, I’ve been implementing helper functions to allow me to properly handle such operations.

Just look at the “December 30, 2012” entry, for example.
In the article, I explain that a simple data type change solved “the” problem, but at the time I didn’t realize that this change would introduce other issues; and all because of the fucked up way VB.NET handles data types.

Consider this c# code:
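The original snippet isn’t preserved here, but it boils down to a minimal sketch along these lines: increment an unsigned 16-bit variable past its maximum value.

```csharp
using System;

class Program
{
    static void Main()
    {
        ushort x = 65535; // the maximum value of an unsigned 16-bit integer
        x++;              // C# arithmetic is unchecked by default
        Console.WriteLine(x); // prints 0
    }
}
```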

What result should we expect?
Well, that is simple: if “x” is an unsigned 16-bit variable, then its maximum possible value (65535) plus 1 (one) should wrap around to 0 (zero); and that is exactly what happens if you execute that code: the program prints 0.

So, let’s try a VB.NET version of the code and let’s see what happens.
Here’s the code:
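The VB.NET equivalent would look roughly like this (a sketch; note that VB.NET’s integer overflow checks, which are on by default, can be removed project-wide with the /removeintchecks compiler option):

```vbnet
Imports System

Module Program
    Sub Main()
        Dim x As UShort = 65535 ' the maximum value of an unsigned 16-bit integer
        x += 1                  ' VB.NET checks for integer overflow by default
        Console.WriteLine(x)
    End Sub
End Module
```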

And here’s the result: an unhandled System.OverflowException (“Arithmetic operation resulted in an overflow.”).


So what the hell happened???
Well, BASIC happened…

In c#, just as in c/c++, Java and many other languages, the value of “x” is rolled over to provide the correct result. In VB.NET, we simply get a stupid exception, informing us of an overflow.

The implications of this behavior are something anyone coding in BASIC should seriously consider. Myself included.

Why is the VB.NET compiler throwing an exception here?
Why can’t it behave like the c# compiler?
Am I missing something? Is there, perhaps, a way to change the compiler’s default (and quite idiotic) behavior?
Can you even imagine the hoops I’ve had to jump through trying to compensate for such behavior in the 80×86 emulator?

Oh… and in case you were wondering: yes, VB6 has the exact same behavior:
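VB6 has no unsigned integer types at all, so the closest test uses its signed 16-bit Integer; the result is run-time error 6 (“Overflow”). A sketch:

```vb
' VB6: Integer is a signed 16-bit type (VB6 has no unsigned types)
Dim x As Integer
x = 32767      ' the maximum value of a VB6 Integer
x = x + 1      ' raises run-time error 6: "Overflow"
Debug.Print x  ' never reached
```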


Will this ever be fixed?
I honestly doubt it… but one can only hope.

So why is this happening? Why are these two compilers behaving differently?
Well, let’s take a look at the MSIL code.
This is what the VB.NET compiler produced:
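The original disassembly isn’t reproduced here, but the relevant portion looks roughly like this (a reconstruction; offsets are from a debug build and may vary):

```cil
IL_0000:  nop
IL_0001:  ldc.i4      0xffff
IL_0006:  stloc.0
IL_0007:  ldloc.0
IL_0008:  ldc.i4.1
IL_0009:  add.ovf        // overflow-checked addition
IL_000a:  conv.ovf.u2    // overflow-checked narrowing back to UInt16
IL_000b:  stloc.0
IL_000c:  ldloc.0
IL_000d:  call        void [mscorlib]System.Console::WriteLine(int32)
IL_0012:  nop
IL_0013:  ret
```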

And this is the code produced by the c# compiler:
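And the corresponding c# disassembly, again a reconstruction with debug-build offsets:

```cil
IL_0000:  nop
IL_0001:  ldc.i4      0xffff
IL_0006:  stloc.0
IL_0007:  ldloc.0
IL_0008:  ldc.i4.1
IL_0009:  add            // plain addition, no overflow check
IL_000a:  conv.u2        // plain truncating conversion back to UInt16
IL_000b:  stloc.0
IL_000c:  ldloc.0
IL_000d:  call        void [mscorlib]System.Console::WriteLine(int32)
IL_0012:  nop
IL_0013:  ret
```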

Notice any differences?
No? Then take a closer look at the “add” (IL_0009) and “conv” (IL_000a) opcodes.

I know very little about MSIL, but those additional overflow checks are a deal breaker, making VB.NET behave like a broken language.

Is there a reason behind this behavior?
I guess so… otherwise, why would Mono produce the exact same results?


Or is the Mono runtime simply emulating .NET’s stupidity?

The source code for the programs used to test this behavior can be found at GitHub.