Ctor conflicts

Perhaps the content of this post is trivial and widely known(?), but I just spent some time fixing a bug related to the following C++ behavior.

Let’s take a look at this code snippet:

// main.cpp ------------------------------
 
#include <stdio.h>
 
void bar();
void foo();
 
int main(int argc, const char *argv[])
{
    bar();
    foo();
    return 0;
}
 
// a.cpp ---------------------------------
 
#include <stdio.h>
 
struct A
{
    A() { printf("apple\n"); }
};
 
void bar()
{
    new A;
}
 
// b.cpp ---------------------------------
 
#include <stdio.h>
 
struct A
{
    A() { printf("orange\n"); }
};
 
void foo()
{
    new A;
}

The output of the code above is:

apple
apple

Whether we compile it with VC++ or g++, the result is the same.

The problem is that although the struct or class is declared locally to each translation unit, the mangled name of its inline constructor is the same in both object files, so only one of the two definitions survives. (Strictly speaking, having two different definitions of the same class is a violation of the One Definition Rule, so the behavior is undefined.) While the allocation size of the struct or class is correct, the constructor being invoked is always the first one encountered by the linker, which in this case is the one which prints ‘apple’.

The real issue is that neither the compiler nor the linker warns the user in any way that the wrong constructor is being called, and in a large project with hundreds of files it may very well happen that two constructors collide.

Since namespaces are part of the name of the symbol, the code above can be fixed by adding a namespace:

namespace N
{
struct A
{
    A() { printf("orange\n"); }
};
}
using namespace N;
 
void foo()
{
    new A;
}

Now the correct constructor will be called.
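
An unnamed namespace would work just as well, since it makes the class unique to its translation unit. A minimal sketch of that variant of b.cpp (same idea as above, just without a named namespace):

namespace
{
struct A
{
    A() { printf("orange\n"); }
};
}
 
void foo()
{
    new A;
}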

I wrote a small (dumb) Python script to detect possible ctor conflicts. It just looks for struct or class declarations and reports duplicate symbol names. It’s far from perfect.

# ctor_conflicts.py
import os, sys, re
 
source_extensions = ["h", "hxx", "hpp", "cpp", "cxx"]
 
symbols = { }
psym = re.compile("(typedef\\s+)?(struct|class)\\s+([a-zA-Z_][a-zA-Z0-9_]*)(\\s+)?([{])?")
 
def processSourceFile(fname):
    with open(fname) as f:
        content = f.readlines()
    n = len(content)
    i = 0
    while i < n:
        m = psym.search(content[i])
        i += 1
        if m is None:
            continue
        symname = m.group(3)
        # exclude some recurring symbols in different projects
        if symname == "Dialog" or symname == "MainWindow":
            continue
        # make sure a bracket is present
        if m.group(5) is None and (i >= n or not content[i].startswith("{")):
            continue
        loc = fname + ":" + str(i)
        if symname in symbols:
            # found a possible collision
            print("Possible collision of '" + symname + "' in:")
            print(symbols[symname])
            print(loc)
            print("")
        else:
            symbols[symname] = loc
 
def walkFiles(path):
    for root, dirs, files in os.walk(path):
        for f in files:
            # skip SWIG wrappers
            if f.find("PyWrapWin") != -1:
                continue
            # skip Qt ui files
            if f.startswith("ui_"):
                continue
            fname = os.path.join(root, f)
            ext = os.path.splitext(fname)[1]
            if len(ext) > 1 and ext[1:] in source_extensions:
                processSourceFile(fname)
 
 
if __name__ == '__main__':
    nargs = len(sys.argv)
    if nargs < 2:
        path = os.getcwd()
    else:
        path = sys.argv[1]
    walkFiles(path)

In my opinion this could be handled better on the compiler side, at least by giving a warning.

ADDENDUM: Myria (@Myriachan) explained the compiler internals behind this on Twitter:

I’m just surprised that it doesn’t cause a “duplicate symbol” linker error. Symbol flagged “weak” from being inline, maybe? […] Member functions defined inside classes like that are automatically “inline” by C++ standard. […] The “inline” keyword has two meanings: hint to compiler that inlining machine code may be wise, and making symbol weak. […] Regardless of whether the compiler chooses to inline machine code within calling functions, the weak symbol part still applies. […] It is as if all inline functions (including functions defined inside classes) have __declspec(selectany) on them, in MSVC terms. […] Without this behavior, if you ever had a class in a header with functions defined, the compiler would either have to always inline the machine code, or you’d have to use #ifdef nonsense to avoid more than one .cpp defining the function.

The explanation is the correct one. And yes, if we define the ctor outside of the class in both translation units, the linker does report a duplicate-symbol error.
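
A quick sketch of what that looks like (only the relevant parts of a.cpp and b.cpp, with both constructors moved out of line):

// a.cpp ---------------------------------
 
#include <stdio.h>
 
struct A
{
    A();
};
 
A::A() { printf("apple\n"); }
 
// b.cpp ---------------------------------
 
#include <stdio.h>
 
struct A
{
    A();
};
 
A::A() { printf("orange\n"); }

Since the out-of-line definitions are no longer weak/inline symbols, linking a.cpp and b.cpp together now fails (e.g. an LNK2005 "already defined" error with VC++, "multiple definition of A::A()" with g++/ld).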

The logic mismatch here is that translation-unit-local structures do exist in C, but translation-unit-local ctors in C++ don’t: the correct struct is allocated, yet the wrong ctor is being called. Also, while the symbol is weak for the reasons explained by Myria, the toolchain could still give an error if the ctor code doesn’t match across files.

So the rule here could be: if you have local classes, avoid defining the ctor inside the class. If you already have a conflict, as I did, and don’t want to change the code, you can fix it with a namespace as shown above.

Posted in Internals, Programming | 2 Comments

MUI files under the hood

Have you ever, on Vista or later, copied a system file like notepad.exe onto the desktop and tried to execute it? Have you ever tried to modify the resources of a system file like regedit.exe? Most likely neither of the two was a successful operation.

This will be very brief because the topic is very limited and because of my lack of time: bear with me. :)

If you try to copy, for instance, notepad.exe onto the desktop and run it in a debugger, you will notice that it fails in its initialization routine when trying to load its accelerators. You take a look at the HINSTANCE passed to LoadAccelerators and notice that it’s NULL. You open notepad.exe in a resource viewer and notice that it doesn’t contain accelerator resources. Thus, you realize that the global instance is associated with some external resources as well. Go back to the system folder you took the executable from and you’ll notice language directories such as “en-US”. Just copy the one which identifies the language of your system into the same directory as notepad.exe. You’ll notice that now notepad.exe runs correctly.

Vista introduced the separation between binary and language dependent resources to allow a single Windows image to contain more than just one language. You can obtain more information about the development aspects on MSDN.

The language directory contains files with names such as “notepad.exe.mui”, one for every file they provide resources for (including dlls). These are very basic PE files which contain only a resource directory and are loaded into the address space of the process as they are.

These files are associated with the main file in two ways:

1) By name: just rename notepad.exe to test.exe and the MUI file accordingly and it still works.
2) Via resource, as we’ll see.

If you open both notepad.exe and its MUI file with a resource viewer, you’ll see they both contain a “MUI” resource. What this data contains can be roughly understood from the MSDN or SDK:

//
// Information about a MUI file, used as input/output in GetFileMUIInfo
// All offsets are relative to start of the structure. Offsets with value 0 mean empty field.
//
 
typedef struct _FILEMUIINFO {
    DWORD       dwSize;                 // Size of the structure including buffer size [in]
    DWORD       dwVersion;              // Version of the structure [in]
    DWORD       dwFileType;             // Type of the file [out]
    BYTE        pChecksum[16];          // Checksum of the file [out]
    BYTE        pServiceChecksum[16];   // Checksum of the file [out]
    DWORD       dwLanguageNameOffset;   // Language name of the file [out]
    DWORD       dwTypeIDMainSize;       // Number of TypeIDs in main module [out]
    DWORD       dwTypeIDMainOffset;     // Array of TypeIDs (DWORD) in main module [out]
    DWORD       dwTypeNameMainOffset;   // Multistring array of TypeNames in main module [out]
    DWORD       dwTypeIDMUISize;        // Number of TypeIDs in MUI module [out]
    DWORD       dwTypeIDMUIOffset;      // Array of TypeIDs (DWORD) in MUI module [out]
    DWORD       dwTypeNameMUIOffset;    // Multistring array of TypeNames in MUI module [out]
    BYTE        abBuffer[8];             // Buffer for extra data [in] (Size 4 is for padding)
} FILEMUIINFO, *PFILEMUIINFO;

You’ll find this structure in WinNls.h. However, this structure is meant for GetFileMUIInfo; it doesn’t match the physical data.
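
Before moving on to the physical data, here’s a rough sketch (untested) of how the structure is meant to be fed to GetFileMUIInfo if you want to query this information programmatically; the flag combination and the extra buffer size are just examples:

// mui_info.cpp ---------------------------
 
#define _WIN32_WINNT 0x0600 // Vista or later, needed for GetFileMUIInfo
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
 
int main()
{
    // The caller allocates FILEMUIINFO plus extra room for the variable-length
    // data referenced by the various offsets, and fills in dwSize/dwVersion.
    DWORD cb = sizeof(FILEMUIINFO) + 1024;
    PFILEMUIINFO info = (PFILEMUIINFO)calloc(1, cb);
    info->dwSize = cb;
    info->dwVersion = MUI_FILEINFO_VERSION;
 
    if (GetFileMUIInfo(MUI_QUERY_TYPE | MUI_QUERY_CHECKSUM,
                       L"C:\\Windows\\notepad.exe", info, &cb))
        printf("file type: %lu\n", info->dwFileType);
 
    free(info);
    return 0;
}

The physical data inside the “MUI” resource, instead, looks like this: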

Offset     0  1  2  3  4  5  6  7    8  9  A  B  C  D  E  F     Ascii   
 
00000000  CD FE CD FE C8 00 00 00   00 00 01 00 00 00 00 00     ................
00000010  12 00 00 00 00 00 00 00   00 00 00 00 EC 6C C4 C4     .............l..
00000020  FF 7C C9 CC F8 03 C7 B3   8C 8A 67 51 11 72 DC 72     .|........gQ.r.r
00000030  80 73 67 9E AB 20 3D FC   AA D4 2F 04 00 00 00 00     .sg...=.../.....
00000040  00 00 00 00 00 00 00 00   00 00 00 00 00 00 00 00     ................
00000050  00 00 00 00 88 00 00 00   0E 00 00 00 98 00 00 00     ................
00000060  20 00 00 00 00 00 00 00   00 00 00 00 00 00 00 00     ................
00000070  00 00 00 00 B8 00 00 00   0C 00 00 00 00 00 00 00     ................
00000080  00 00 00 00 00 00 00 00   4D 00 55 00 49 00 00 00     ........M.U.I...
00000090  00 00 00 00 00 00 00 00   02 00 00 00 03 00 00 00     ................
000000A0  04 00 00 00 05 00 00 00   06 00 00 00 09 00 00 00     ................
000000B0  0E 00 00 00 10 00 00 00   65 00 6E 00 2D 00 55 00     ........e.n.-.U.
000000C0  53 00 00 00 00 00 00 00                               S.......

The first DWORD is clearly a signature. If you change it, the MUI is invalidated and notepad won’t run. It is followed by another DWORD describing the size of the structure (including the signature).

Offset     0  1  2  3  4  5  6  7    8  9  A  B  C  D  E  F     Ascii   
 
00000010                                        EC 6C C4 C4                 .l..
00000020  FF 7C C9 CC F8 03 C7 B3   8C 8A 67 51 11 72 DC 72     .|........gQ.r.r
00000030  80 73 67 9E AB 20 3D FC   AA D4 2F 04                 .sg...=.../.

These are the two checksums:

  BYTE  pChecksum[16];
  BYTE  pServiceChecksum[16];

These two checksums are probably in the same order as in the structure. They both match the ones contained in the MUI file, and if you change the second one, the application won’t run.
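
Putting together the fields seen so far, the physical layout seems to start roughly like this. This is only a sketch based on the dump above: the field names are my own and everything marked unknown is a guess.

// Rough sketch of the on-disk "MUI" resource data as seen in the dump above.
// Not an official definition: only the signature, the size and the two
// checksums have actually been verified.
struct MUI_RESOURCE_DATA
{
    DWORD Signature;            // 0xFECDFECD
    DWORD Size;                 // 0xC8: size of the whole blob, signature included
    DWORD Unknown1;             // 0x00010000 in the dump, possibly a version
    DWORD Unknown2;             // 0
    DWORD Unknown3;             // 0x12 in the dump
    DWORD Unknown4[2];          // 0
    BYTE  Checksum[16];         // at offset 0x1C, matches pChecksum of the MUI file
    BYTE  ServiceChecksum[16];  // at offset 0x2C, matches pServiceChecksum
    // ... followed by zeroes, what look like offset/size pairs for the
    // resource type lists and, at the end, the language name ("en-US" here)
};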

There are no other association criteria: I changed both the main file and the MUI file (by using a real DLL and just replacing the resource directory with the one of the MUI file) and it still worked.

Now to the second matter mentioned at the beginning: modification of resources. If you try to add or replace an icon in notepad.exe you will most likely not succeed. This is because, as mentioned on MSDN:

There are some restrictions on resource updates in files that contain Resource Configuration(RC Config) data: LN files and the associated .mui files. Details on which types of resources are allowed to be updated in these files are in the Remarks section for the UpdateResource function.

Basically, UpdateResource doesn’t work if the PE file contains a MUI resource. Now, prepare for an incredibly complicated and technically challenging hack to overcome this limitation… Ready? Rename the “MUI” resource to “CUI” or whatever, try again, and it works. Then restore the MUI resource name and all is fine.

The new build of the CFF Explorer handles this automatically for your comfort.

This limitation probably broke most of the resource editors for Win32. Smart.

Posted in Internals, Reversing | 3 Comments

Preparing a bugfix version of CFF Explorer

It has been many years since the last update of what had started as a hobby side-project when I was 19. I’m sorry that I haven’t updated the CFF for such a long time, given that thousands of people use it every day. A few months ago I stopped working for Hex-Rays to fully dedicate myself to my own company, and thus I have decided that I now have the time and the energy (barely) to finally update the CFF.

Over the years I’ve received several bugfix requests, but couldn’t oblige because of the lack of time. If you’d like a particular fix to go into the upcoming release, please leave a comment under this blog post or drop me an email at ntcore@gmail.com (feel free to repeat the request, as it might have been lost over the years).

Please don’t request radical changes or improvements; maybe we’ll leave that for later. If your company needs professional PE inspection (not editing), I’d advise you to check out my current commercial product at icerbero.com/profiler, which doesn’t cover ‘just’ the Portable Executable format.

UPDATE: Uploaded new version with the following improvements:

– Dropped Itanium version
– Added ENCLog and ENCMap .NET tables
– Modify resources of system files (MUI limitation)
– Fixed resource loop bug
– Fixed MDTables string overflow bug
– Fixed command line scripting bug
– Fixed ‘Select All’ bug in hex editor
– Fixed missing offset check in .NET tables
– Fixed missing reloc size check
– Fixed scripting handles bug
– Use FTs when OFTs are invalid
– Updated UPX

You can continue to leave comments or send me emails. As soon as there are enough new bug reports, I’ll upload a new version. In time, maybe, some small improvements could be included apart from bug fixes.

Posted in News, Update | 33 Comments

Companies on the Verge of a Nervous Breakdown

This is basically a continuation of the previous post about the biggest software delusions of the last decade. In hindsight I would have set a rather different tone for what I wrote, less rant and more technical, but the problem is that I keep things on my mind for a long time and never care enough to write them down, leaving them rotting until they come out as technological rants. Anyway, rants are always more fun to read, so let’s keep the style.

In this post I’m going to write about some things left out of the previous one and also comment on some things which have happened in the meanwhile. You might ask what I have to show for my big claims about complex issues. Very little indeed, but does this make them less true? You’ll be the judge. What I try to offer here is a different perspective on issues which are always analyzed from the marketing or business point of view. Trying to explain these things with technical reasons offers, in my opinion, much better explanations than those fished from the flavor-of-the-day marketing magic hat.

After the last post I was sent by email a “graphic that illustrates the 30 years of innovation at Microsoft and their failures along the way” to link on my blog. I don’t really care about the reasons for asking for a link. What I want to say is that this graphic made fun of Microsoft’s failures of the decade just by listing some of them. And this is more or less the usual approach I see taken on the subject, even by technical blogs: focusing on the facts rather than trying to understand them.

Windows Phone

Can we say that Lumia/Windows Phone 7 phones flopped, or is it still too soon? After some of the articles I’ve read here and there, I think we can say that. Lumia phones were pushed by a big carrier in the US (AT&T) and have been the subject of a massive marketing campaign, but they still sold less than the dropped and unadvertised N9/MeeGo project.

Nokia dropping MeeGo is laughable! It can’t be stressed enough, because that would’ve been their only chance to regain market share, and they completely blew it.

But why? Surely many reasons stand in the background, but at the end of the day one has to consider what is better on the technical level. If your definition of a better phone is how shiny it looks, then important decisions in the mobile industry shouldn’t be left to you. Many think that Apple is leading the smartphone/tablet industry because of its marketing strategy. While Apple products are often appealing and polished, this couldn’t be farther from the truth. Take the desktop market. Is Apple leading there? No. Why not? Aren’t the products as polished as their counterparts in the mobile market? Or does Apple strangely suck at marketing its desktop products? Sure, Apple computers are expensive, but so are iPhones!

The first rule here is that great products sell themselves. Clearly marketing helps, but no matter how much marketing money you spend on a product which people don’t want, it will not sell, especially in the long term.

Take MeeGo, for instance. I don’t mean that this project would’ve rescued Nokia instantly. They would probably still have had to endure 1-2 years of losses along the road, but eventually it would’ve flourished. Of this I’m sure. And considering how many people still buy overpriced N9 phones on eBay, I have a point. The trick is that if you know you have a great project at hand, you invest in it and endure some losses in the strong belief that it will eventually succeed.

One might say that this is exactly what is happening to Nokia and Windows Phone, only that they are betting on the wrong horse. It would be an acceptable point of view if we didn’t get hands-on with the technology itself. MeeGo was a great project; in my opinion it would’ve been the most advanced OS on the mobile market. Compare this with a repackaged Windows Mobile (not based on NT technology) running Silverlight. The fact alone that a developer is forced to write his apps in Silverlight or XNA would be enough to say “case fuckin’ closed!”. Rumors say Windows Phone 8 will feature an NT kernel and also that developers will be able to compile C++ code. It seems that after enormous pressure Microsoft had to give in about C++ (wow, that was totally unexpected… except that I wrote it a year ago and it would’ve been clear to anyone who has even an iota of experience as a developer). Even if it’s true, this is totally messed up. The developers who lost time porting their C++ code to C# for Windows Phone 7, because C++ would never be part of the toolchain of that OS, probably lost their time for nothing. Also, users who are running Windows Phone 7 won’t get a free update to the next version, which is incredible since both iOS and Android update their OS even for older phones. It should be pretty clear that when you want to take away market share from the biggest in the game, you must offer, at least in part, something which is better. Now can someone tell me in what regard Windows Phone 7 is better than iOS or Android? Leave out the hardware of Nokia (and I still think that a smartphone without a front camera is pretty silly nowadays) and just focus on the operating system itself. Is there any advantage? Both iOS and Android have many more apps, and of higher quality, than WP7. iOS is closed just like Windows Phone 7, while Android is easier to hack and play with. Both iOS and Android allow C++ to be compiled, while WP7 doesn’t.

Metro and Windows 8

I’m still calling it Metro, but what is it called now? Microsoft lost the brand to a very famous European wholesale chain store. As a friend of mine said, “I would fire the whole marketing team if they can’t even come up with a brand name which is not already used”. And not only is it used, it’s used by a very big chain. It’s like calling your new technology “Walmart”; at least google the name first! (Maybe it’s because they were forced to use Bing…)

And enough with these flashy marketing names for development technologies! There’s no reason to pretentiously call something “Silverlight”; it only makes it much more ridiculous when it ends up in the shithouse (or silvershithouse). Use dumb prosaic names like Win32, MFC, Qt! It doesn’t fuckin’ matter! What matters is the code and only the code, and after a year or more of hearing about Metro I haven’t yet seen the code! Granted, I don’t look for it, I don’t dig it up from some MSDN showcase, I don’t go to conferences, but this isn’t a good enough reason. Just google “metro code snippet” or anything similar and it will be hard to come up with results (I found a preview on MSDN which is just a collection of small samples which I was too lazy to view in full). The code in this case is like a big mystery waiting to be unveiled…

Except that nobody cares! I have yet to see anybody waiting impatiently for Metro or even talking about it (apart from making fun of the name, etc.).

Microsoft got me personally annoyed to the point where I don’t follow anything they do anymore. I will have to try Windows 8 sooner or later just to guarantee the stability of my own product, but that’s it. I won’t use it nor play with it. I will skip it completely. And all this is OK, because I think that everything Microsoft is doing is not here to stay: Bing, Silverlight, Windows Phone, WPF, Zune (R.I.P.), etc. And time is confirming my claims. Of course, I can’t predict the future; something might change and alter the fate of one of these products as well. But with the current management this is very unlikely.

From what I read about it, the whole new UI is just jaw-droppingly stupid. It’s incredible how this trend of “simplifying UIs” got hold of so many projects. Seen what happened to GNOME 3? Seen what happened to Ubuntu when it came out with Unity? Why is Mint now so popular?

Sure, people don’t want to relearn things they already can do, but the problem here is that there’s no damn reason to replace something which is working perfectly well with something which is just worse. While humans strive for harmony and unity, these concepts can’t be applied to everything. A desktop is a productivity device: it’s efficient, fast and advanced. A tablet, on the other hand, is a device for consumption: it is ideal for reading, playing games, browsing the web. Having one application at a time visible on a desktop is not only a bad idea, it is idiotic beyond imagination. The key point of a desktop is that it allows complex applications to be used which would be impossible to use on a tablet: Photoshop, Maya, LibreOffice, Premiere, etc. And the whole concept of tiles, which to Microsoft is so brilliant, is equally moronic. If Microsoft doesn’t drop the whole concept soon enough after the Windows 8 debacle, I will just drop Windows completely.

The complexity of window managers could be handled much more elegantly by providing a basic mode for users who are not technologically capable.

The betrayal

Developers have been “betrayed” by Microsoft numerous times. As I mentioned in the previous post, Microsoft dished out and deprecated new technologies at a pace that no one could follow, abandoning in a matter of a few years what they had just claimed to be their newest direction, thereby confusing and frustrating developers who tried to keep up to date, while refusing to significantly update existing and widely used technologies.

Or in the case of Windows Phone 7 the few developers who ported their code to C# now read that Windows Phone 8 will allow C++ code to be compiled. Will they be satisfied by this? Same for the users who bought Windows Phone 7 devices: they will not be able to run applications compiled for Windows Phone 8. Well, at least they got the tiles…

Losing the ground

The one thing which differentiates one OS from another, apart from its own intrinsic quality, is the number of applications which run on it. But the quality of the OS increases once there’s enough interest in it, and that interest is again a result of the applications which run on it. So simple, right? While Microsoft knows this rule, it did everything it could to annoy developers. Microsoft tried to bind developers to Windows not by pleasing them, but by dishing out ugly technologies which run only on Windows and using its market share to force developers to use them.

Developers, like anyone else, guard their own interests. Many lost faith in Microsoft completely and started looking for safer havens. This is surely true for other kinds of experts too, although I can speak only for my own kind.

For instance, how did Microsoft lose its IE market share? I can’t even start judging IE as a product, apart from its history of poor security, its history of ignoring standards and making life hell for web developers, and its appalling plugin technology. We’re talking about a product which in 2012 considers clicking on a URL such an important event that it signals it by emitting a click sound. IE lost its market share by being an inferior product. But do you think that users with no technical ability would’ve downloaded and installed Firefox on their own? No, it’s because more technical people advised them to do so. I did it many times. And this is true for many products which make a name for themselves among technical people and from there get to the masses. By the way, I consider this the best path for a product, because it means it stands on solid ground.

And finally Valve is starting to sell games on Linux. It can’t be stressed enough how important this is, because if this works out and I can’t see why it shouldn’t, it will change everything. If Microsoft loses the game battle to Linux, then they will lose the OS battle. I think this could be the battle of Stalingrad for Microsoft, because once there are enough games on Linux, there’s no end to the ground which Microsoft can lose. At that point Valve could even come out with its own console and compete against XBox. And since the gaming industry is so powerful, it would mean an overwhelming cash and interest injection into Linux, which everybody involved in that OS could benefit from. Of course, I’m speculating here, but does Microsoft understand the potential here?

I don’t think management does. They are hopping from one technology to another: WinForms, no, WPF, no, Silverlight, no, Metro (replace with the new, still unknown name), C#, no, HTML5+JS. The problem, in the end, is that if as a CEO you don’t know what you are dealing with, you can’t make informed decisions and you will surround yourself with people you can’t evaluate technically. Your decisions will then be based only upon appearance, the flashy name, how pretentious the concept sounds or how many millions are spent on marketing. A technically capable CEO is not a guarantee of success, but an incapable one is a recipe for failure. Remember what the former CEO of Pepsi did to Apple? Look at what Elop is doing to Nokia or Ballmer to Microsoft.

Posted in Critique | 14 Comments

The biggest software delusions of the last decade

… or how Microsoft is trying to lose its dominant position.

It’s not only about Microsoft, of course. Other big companies have made mistakes, but Microsoft is surely the company which has made most of them in the last ten years. Surely it’s because they can afford it: others can’t make that many without filing for bankruptcy.

Managed development

This is probably the root of most of the dumb decisions. When Java came out it was appealing to many. Microsoft was already a follower in its decisions at that time and started its .NET development. .NET itself wasn’t a bad idea. At the time I thought it was going to be a part of the ecosystem alongside native applications, replacing the obsolete and buggy Visual Basic 6.

The reality we can see nowadays is that Microsoft wants its managed technology to take over and become the preferred solution for Windows. From what I could grasp reading some articles about Windows 8, their interest lies in forcing desktop developers to write applications that can easily be run on, or ported to, tablets and phones.

But does this infatuation with managed development make sense? To answer this question, it is first necessary to open a parenthesis.

The big innovator of the last decade has been Apple, and not because Apple is so smart, but because the others have been clumsy and dumb. I’m talking from a technology perspective here and not from a business/marketing point of view. Apple is obviously very good at marketing, but it has also had a passion for its products. In my opinion, someone who is the CEO of a big IT company should be able to tell the difference between a computer and a toaster. So, yes, this rules out Ballmer.

I was once talked into buying a Zune MP3 player. It was quite expensive (99 euros) considering my previous MP3 players. After trying it out I discovered it didn’t allow me to play tunes based on the directory they were stored in. I could only play them based on their tags (artist, album, etc.). Did Microsoft seriously expect me to now tag all my tunes? Years before, I had ripped many of my CDs without filling out the tags. Thus, on their player my music was interrupted by my Swedish lessons! On top of that, it wasn’t even a standard USB memory device; it had its own drivers. Let’s just say it’s the worst MP3 player I have ever had. Afterwards I bought a 30-euro Philips player and have lived happily ever since. Why did I write this? Because it says a great deal about the care which goes into products. Which in the case above is zero. How is it possible that no one in the process raised their hand and said “hey, but it’s missing this and that”? It is a great indicator of how certain things are reviewed at Microsoft.

But wait. You could say that the iPod (which I have never used, by the way) has the same characteristics and lacks this functionality as well. First off, the iPod targets a certain audience and is practically bundled with its iTunes store. The argument can be reduced to: if I had wanted an iPod, I would have bought one. And that’s the first big problem of Microsoft: it can’t come up with ideas of its own and doesn’t understand why people prefer the original to the copy. Apple is far from representing perfection in its products, but what is more imperfect than a mere imitation without any advantages?

This was quite a huge parenthesis but you’ll see that I’ll manage somehow to pull the strings together. And if I fail, hey, I can always do some marketing to compensate.

The point of all this is that Apple has been the technology leader of the last ten years. And which are the leading technologies produced by Apple? iPhone, iPad and iPod Touch which on the software side means iOS.

iOS is a mix of C, C++ and Obj-C. Developers write their applications for iOS in Obj-C or through a layer on top of it. Objective-C is basically C with a compiler front-end which allows the embedded Smalltalk-style syntax. Thus, Apple is dominating the market with a programming language whose roots are in the 70s.

Did that create any sort of barrier or limitation for them? It seems not.

Clearly the technological advantages of managed development do not show in the results for the user, since hardly anyone can argue that the Windows Phone 7 experience is nicer and more appealing than that of an iPhone.

Which means that the advantages have to be on the development side if they can’t be found in the results (more on that later).

Is it easier and more convenient for a developer to use .NET instead of, say, native C++ or Objective-C? If he is just learning to program and doesn’t understand the concept of a pointer it might be, although even that isn’t guaranteed. But even if it is, it is not easier or more convenient for a veteran.

Let’s take, for instance, a company which has developed a nice voice recognition library in C++. After 10 years it has become an advanced product and the decision is made to bring it to embedded devices. It is quite easily ported to iOS or Android in just a few weeks, because both allow native C++ code to be compiled. Not so for Windows Phone 7. Why should the company invest money into rewriting its library for a device which has only about 7% of the market share? Unfortunately, not all companies are as eager to lose money as Microsoft.

Google made the same mistake with Android, but they almost immediately gave in when developers demanded that native code be compilable, and now they’ve got something which doesn’t make much sense: an official Java API plus native modules with their own native API, although a minimal one compared to the Java API. It would of course have made more sense to offer a C/C++ API directly and let other technologies be built on top. Google, nonetheless, seems much less stubborn than Microsoft.

So managed isn’t more convenient for companies or developers who already have a product and only need to port it, but what about those who are starting their product only now? Is it convenient for them?

The big advantage of Java, which made it so appealing in its day, was its multiplatform capability. But plain C/C++ is multiplatform too. What a language needs to become multiplatform is only the API. There couldn’t be a better example of C++ being multiplatform than the Qt framework. And what is less multiplatform than a technology which is intended to run only on Microsoft products? A great deal of code can be ported between iOS and Android. This doesn’t apply to Windows Phone 7. So, even for brand new products it’s highly inconvenient to use .NET, given that it will preclude porting the code to other devices.

Hmm, it doesn’t show in the results and it’s a bad investment. What about the inherent technological advantages? There are some pros. It’s sometimes easier to debug managed applications and it’s way easier to analyze them. Also, most importantly, they are compiled just once for different devices. One more advantage which comes to my mind is that they allow reflection. But dynamism isn’t an advantage inherent only to managed languages, as Objective-C, Qt and lastly my article about Dynamic C++ can prove.

The first three advantages come at a cost. Debugging managed applications isn’t always easier. It’s easier if the problem is in the application itself; it becomes a nightmare when the problem is inside the framework. In that case, the complexity becomes much bigger than when debugging native applications. A friend of mine was affected by the large object heap problem. And I haven’t really understood whether the problem has been addressed in .NET 4 or not. Nor do I care, actually. But in that thread Connor Douglas writes, on 16/08/2011:

“This problem has caused me serval sleepless nights and is currently delaying a project from going into production. I don’t understand why microscoft will not look at this problem. I am dealing with heavy image processing application with large arrays.

The application is meant to run periods of years without being restarted.

Very disapointed to find out that this is an issue so late in our development cylce!”

Please note: the problem was reported on 18/12/2009. Two years have passed.

From my experience I can only say that for big projects it’s never a good idea to delegate complexity to others without the possibility of intervening directly. Every managed language (especially if the VM is not open-source) makes the developer completely dependent on the owner of the managed technology. What can the developer above do other than knock at Microsoft’s door and demand a fix? It’s not like he can choose another .NET framework or patch the framework himself.

It’s indeed easier to analyze .NET applications. It’s also very easy to reverse engineer them, as I showed years ago in my articles about .NET reversing (part 1, part 2). Thanks to the characteristics of managed languages themselves and the amount of metadata and type information, .NET applications are de facto open-source. Anyone can take .NET Reflector and obtain the original source code from any .NET assembly. If anyone thinks protections will prevent this, please read the two articles I linked above. It’s ironic that this is what the No. 1 anti-open-source company in the world wants: that all applications should become open-source.

The last argument which I often hear used in favour of managed applications is ‘security’. It’s true that a buffer overflow can’t happen in a managed application, unless of course it happens in the VM itself. But I can probably safely say that 95% of buffer overflows in history were caused by unsafe string functions. The fact that C featured an unsafe API can’t be used as an argument in favour of managed languages. And if we consider the remaining risk in native applications, the solution is to tighten the security of processes and hardware. We have seen many new things during the last 10 years: DEP, ASLR, stack cookies, SafeSEH. Writing a buffer overflow exploit on Windows 7 x64 is already anything but trivial. And much more can be achieved without invoking managed technologies.

Garbage Collection

While this may seem bound to managed and scripting languages, it isn’t. Some native languages have garbage collectors as well, and it was the big trend in the early 2000s. Garbage collection makes a lot of sense in scripting languages, but it should be confined there. I made up my mind about this topic years ago and it boils down to two very simple conclusions.

1) A garbage collector doesn’t make sense as long as every memory leak is smaller than the memory wasted by a garbage collector.

2) It’s bad for shaping the mentality of developers. Memory is a resource just like a file or a socket. Would you expect someone else to close a file you opened?

The second point is in my view self-evident and the first one is easy to demonstrate. Just consider the large object heap discussed in the previous paragraph and the quotation of the article related to that:

“You’d have thought that memory leaks were a thing of the past now that we use .NET. True, but we can still hit problems. We can, for example, prevent memory from being recycled if we inadvertently hold references to objects that we are no longer using.”

Which actually would be a leak. Just because the framework will free the memory once the application terminates doesn’t mean it’s not a leak. Even when one is leaking memory in C, the operating system will free the leaked memory once the application terminates. The only advantage here is that the garbage collector doesn’t allow incremental leaks: a pointer in C can be reused several times, leaking memory over and over, which of course can’t happen with a garbage collector.
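
A hypothetical sketch of what I mean by an incremental leak in native code (the kind a GC would indeed prevent):

// Hypothetical sketch: the same pointer is reassigned on every call without
// freeing the previous buffer, so the process leaks 4 KB per call.
#include <cstdlib>
 
static char *g_buffer = nullptr;
 
void refreshBuffer()
{
    g_buffer = (char *)std::malloc(4096); // previous buffer, if any, is leaked here
    // ... fill and use g_buffer ...
}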

But an application without a GC will hardly waste the amount of memory a GC does. There are two kinds of leaks in an application without a GC: those which occur rarely and those which occur often. Only those which occur rarely, or just once, and leak only a small amount of memory will go unnoticed. All the others will be noticed and debugged by the programmer. The small and rare leaks are simply less wasteful of memory than a GC and thus, from a practical point of view, preferable.

Moreover, the GC in .NET could have been implemented much better by making it optional or by giving the developer the ability to delete objects, instead of forcing dereferences and putting silly Dispose() methods here and there.

XAML

While XML is an ideal solution to represent a hierarchy like a UI, things have gotten out of hand with XAML. First thing: it’s the ugliest thing I have ever seen (if we exclude Italian politics).

<Window x:Class="WpfApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Grid>
        <CheckBox Content="CheckBox" Height="16" HorizontalAlignment="Left" Margin="180,49,0,0" Name="checkBox1" VerticalAlignment="Top" />
        <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="303,55,0,0" Name="button2" VerticalAlignment="Top" Width="75" />
        <CheckBox Content="CheckBox" Height="16" HorizontalAlignment="Left" Margin="102,79,0,0" Name="checkBox2" VerticalAlignment="Top" />
        <GroupBox Header="groupBox1" Height="100" HorizontalAlignment="Left" Margin="183,28,0,0" Name="groupBox1" VerticalAlignment="Top" Width="200">
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="47*" />
                    <ColumnDefinition Width="141*" />
                </Grid.ColumnDefinitions>
                <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="0,27,0,0" Name="button1" VerticalAlignment="Top" Width="75" Grid.Column="1" AllowDrop="True" ClickMode="Press" />
            </Grid>
        </GroupBox>
        <ListView Height="100" HorizontalAlignment="Left" Margin="52,59,0,0" Name="listView1" VerticalAlignment="Top" Width="120" />
    </Grid>
</Window>

And this is an extremely simple snippet. How does one usually modify complex snippets or do things which can’t be achieved through the designer? In a way which is in line with the .NET mentality. In fact, one big problem of the .NET framework is that its API is incoherent most of the time. Thus, it’s impossible for a programmer to just guess the correct method to use. Here’s a simple example:

// integer to string
str = Convert.ToString(i);
 
// string to integer
i = Int32.Parse(str);

If you can’t make even a simple int/string conversion coherent in a framework, then I’d say it’s a problem. Let’s take the same code in Qt:

// integer to string
str = QString::number(i);
 
// string to integer
i = str.toInt();

I can assure you that I didn’t need to look up anything the first time I used QString in Qt. Not so for C#. Nobody can just guess the methods.

The developer in this case has to search for a snippet on the internet, which could be called copy-and-paste development. It’s the same with XAML, of course, unless you rely entirely on a designer; but, as with HTML pages, I rarely see complex ones done with a designer, so one has to go with the raw XML.

Forcing programmers to be confronted with XML to make their UIs is the worst idea ever. It has its roots in the typical university way of thinking. Microsoft made big announcements that with XAML programmers finally no longer had to focus on UIs, which could now be left to the graphics people.

What a great idea! I wonder what kind of application has a complete separation between its UI and its code, so that the graphics people can just go on doing their work without worries. When I try to visualize such an application in my mind I see either an animated presentation which doesn’t do anything or a dialog box with three buttons and an image. Once I start to think about anything more complex than that, I strangely can no longer see the separation between UI and code.

UIs are made of complex graphical components, often custom components. Who needs someone meddling with the UI just to rearrange some buttons or add some graphical elements? Does this really make it worthwhile to talk about a separation of UI and code?

And anyway, even granting there could be a separation between the two, I really wonder how many companies have dedicated team members just for UIs. Small companies do exist. And I know this may come as a surprise to you, Microsoft, but individual developers exist too. Amazing, isn’t it?

A typical academic idea which looks good on paper. For three seconds.

Silverlight

I don’t know whether it is, or will be, much used. I have heard many times of Microsoft pushing it by redoing important websites for free using Silverlight.

As much as I don’t like Flash, I would never ever invest in Silverlight; I’d much rather invest in Flash. First off, Flash is much more widely used than Silverlight, runs on basically every operating system and will surely keep doing so in the future, unless Microsoft really decides to buy Adobe (which, by the way, should be stopped by the antitrust authorities, who seem only interested in knowing whether Microsoft is imposing Internet Explorer on Windows users).

The new Flash in no way lags behind Silverlight in terms of features for its purpose. Also, this is typical of Microsoft’s behavior lately: there’s no place for others on the market, they themselves need to be everywhere. Not that competition itself is bad for Flash, quite the contrary, but it should be left to others!

Why? Because when a company bases its business on a technology like that, it actually earns its money from the product. So it must ensure customers are satisfied and that the product works on every platform just as advertised.

I don’t believe that Microsoft really cares about the revenue generated by Silverlight itself. I think it is much more important to them to bind programmers and applications to their core business, which is operating systems.

I believe that in general frameworks should be developed by third parties for these exact reasons, but this is even more true for something which really should work everywhere like a web-embedded technology.

Windows Phone 7

Windows Phone 7 is highly recommended to anyone who wishes to start developing.

On an iPhone.

Yes, precisely. After two hours spent wrestling Silverlight/XAML into displaying a trivial layout on a Windows Phone, any normal programmer will immediately buy an iPhone. Even the odd smalltalk syntax doesn’t look so bad now, does it? Quite the contrary! It seems highly reasonable and elegant. How only could it look bad before?

Apart from that, I don’t know whether they have improved things lately, but at the time it came out it lacked an API for practically anything, even the most trivial things like SQLite support. And of course such support can’t be added manually, since the platform can’t run native modules, as discussed before.

It doesn’t seem a highly intelligent move to release a smartphone after everybody else, years late, and then bring out something so immature. I honestly hope that Windows Phone crashes and burns. Not only because it would teach Microsoft a lesson in humility (if they can actually learn one), but also because it would stop the delusion of forcing desktop developers into rethinking everything for the mobile market, which is the latest Microsoft trend judging by the articles about Windows 8 I have skimmed through these past weeks.

For now it’s unclear how it will end. Although Windows Phone has already been declared a failure, Microsoft has launched a partnership with Nokia and will invest even more in it. As usual: if the product doesn’t sell, it can only be that we haven’t spent enough on it. Let’s do some marketing!

Cloud computing

This word has acquired so many meanings that if Hegel were still alive he would use it too.

Which also means that it no longer makes sense to use it except for marketing purposes, like Apple just did with its iCloud. Which is actually just a service like Dropbox with a fancy name.

The range of meanings the word has acquired includes basic server technology, synchronization, distributed computing and web-based applications (which is probably the most authentic meaning).

If web-based applications are meant, then clearly the idea is stupid. Having every application on a remote computer is not only the worst thing for privacy, but also slow, costly (for the company), inefficient and a sucky user experience.

Many have written about this topic and I certainly am not the one who can shed additional light on it, but I mentioned it anyway just for completeness.

Simplicity

This paradigm has just got to go.

I have installed Ubuntu on the computers of some extremely unskilled people. And they use it. They browse the web, check their email, watch movies, write documents with LibreOffice and even move files to/from memory sticks.

If these people can do it, then I can probably train a penguin to use Ubuntu.

Granted, I’d probably need to find a larger keyboard for its fins; but that’s all.

There’s just no more room for simplifying without removing functionality. On the other hand, Microsoft would simplify my life a great deal if they finally decided to implement search functionality in the list of installed services (and that’s not the only place where search functionality is lacking), or by introducing a file search that actually serves any kind of purpose. That would simplify _my_ life a lot, thank you. And I’m pretty sure that after 20 years these improvements could safely be made without the risk of juggling too many things at once. But I might be wrong. Who knows…

Bing, MSN Live, failed Yahoo acquisition

I can’t put it better than Charlie Brooker once did (please read with British accent):

“I suppose, you know, theoretically you could watch the royal wedding on ITV not the BBC, just like you could search for things on Bing instead of Google, or eat Daddy’s ketchup instead of Heinz. It’s possible, but it’s not _normal_. It borders on perversion. You could watch it on Sky News but that’s like searching Hellman’s Ketchup on Yahoo.”

If you don’t get something right at once, and it was lame from its conception, just give up. Sometimes in life giving up is very healthy for shaping one’s character. Behaving like a pestering child who stomps on the ground and screams “BUT I WANT IT! I WANT IT!” doesn’t seem to me a winning strategy.

Social networks (Facebook, Google+, Wave, MySpace etc.)

Yes, I know that Facebook is an immense business right now. But I have always seen it as a bubble and I hope for everybody’s sake that it really is. Maybe one day humanity will realize that putting sensitive information in the hands of a corporation is not such a smart idea. Or maybe not. Anyway, the topic deserves to be on the list, because an infinite amount of money has been invested (by others) into social networks with no results.

Conclusions

As we have seen, other companies make mistakes, but none as many as Microsoft. A company behaving like a giant who is baffled by others running past him and who breaks into a run to catch them, without noticing that his shoelaces have been tied together.

More money, more marketing. Never passion or care. It always has to be the latest toy. Then, as soon as it has been played with for two seconds, it is thrown to the ground and the focus shifts to the next toy.

What better example of this behavior than Skype? Was it really necessary to buy it? Couldn’t a partnership have sufficed? Won’t it, more realistically, prevent smarter acquisitions in the future for lack of money or because of antitrust intervention?

And can developers really follow Microsoft?

.NET with WinForms, big change. Lot of code needs to be rewritten. But wait what is WPF? XAML needs now to be used for the UIs? Ah. And what’s Silverlight? Should I use WPF or Silverlight? What are the differences? And all the WinForms code? Obsolete??… HEY, WHAT IS METRO?

By the way, is it just me or Metro Apps sounds a lot like Metro Sexual? Sorry, but South Park burned that brand for me.

Anyway, it is clear that everything from Microsoft comes out touched by too many people, too fast, and without the necessary dedication and care which, in my opinion, are essential to great products.

Don’t get me wrong, I’m not saying that Windows 8 will be the end of Microsoft. Of course not. It will probably be disliked just like Vista, and afterwards things will be improved again, as with Windows 7. The problem is that Microsoft is losing time. A lot of time. Sooner or later operating systems such as OS X and Linux will completely catch up with what really matters in a desktop, which, apart from its own features, is the applications which run on it.

I wonder when it will be possible to look forward to a new release of Windows hoping for improvements, instead of hoping that it won’t be worse than the current version.

Moreover, Windows could be improved endlessly without reinventing the wheel every two years. If the decisions were up to me I would work hard on micro-improvements: introduce new sets of native APIs alongside Win32, and do it gradually, with care, trying to give them strong coherency. I would try to introduce benefits which could be enjoyed even by applications written 15 years ago. The beauty should lie in the elegance of finding ingenious solutions for extending what is already there, not in doing tabula rasa every time. I would make developers feel at home and feel that their time and code are highly valued, instead of making them feel like their creations are always obsolete compared to my brand new technology which, by the way, nobody uses. I would also like them to believe that I wouldn’t meddle with their business once it becomes interesting enough, be it virtual machines, web applications, search engines, browsers, VoIP, etc. Just name one thing Microsoft hasn’t been involved in during the last ten years.

I can’t say how much of its dominant position Microsoft will lose in the years ahead. It is certainly working very hard on it, and hard work sometimes pays off.

Posted in Critique, Programming | 45 Comments

Software Theft FAIL

… Or why stealing software is stupid (and wrong). A small guide to detecting software theft for those who are not reverse engineers.

Under my previous post the user Xylitol reported a web page (hxyp://martik-scorp.blogspot.com/2010/12/show-me-loaded-drivers.html) by someone called “Martik Panosian” claiming my driver list utility as his own.

Now, the utility is very small and anybody who can write a bit of code can write a similar one in an hour. Still, stealing is not nice. :)

Since I can’t let this ignominious theft go unpunished :P, I’ll at least try to make this post stretch beyond the specific case and show people who don’t know much about these sorts of things how they can easily recognize whether software of theirs has been stolen.

In this specific case, the stolen software has been changed in its basic appearance (title, icon, version information). It can easily be explored with software such as the CFF Explorer, which in this case also identifies the stolen software as packed with PECompact. If the CFF Explorer fails to recognize the signature, it’s a good idea to use a more up-to-date identification program like PEiD.

However, packing an application to conceal its code is a very dumb idea. Why? Because packers are not meant to really conceal the code, but to bind themselves to the application. What is usually difficult to recover in a packed application is its entry point, the IAT and a few other things, but the great majority of the code is usually recoverable through a simple memory dump. Just select the running application with a utility such as Task Explorer, right-click to display the context menu and click on “Dump PE”.

Now the code can be compared. There are many ways to compare the code of two binaries. One of the easiest is to open them with IDA Pro and use a binary diffing utility such as PatchDiff2. If the reader is doing this as a hobby and can’t afford a commercial license of IDA Pro, the freeware version will do as well.

Just disassemble both files with IDA Pro and save one of the idbs. Then click on “Edit->Plugins->PatchDiff2” and select the saved idb.

Let’s look at a screenshot of the results:

[Screenshot: PatchDiff2 results showing the matched functions]

As you can see, not only were the great majority of functions matched, they were also matched at the same addresses, which proves beyond doubt that they are, in fact, the same application.

It’s important to remember that a limited number of matches between unrelated binaries is normal, because library functions or some basic ones may match across different applications.

A comparison of two applications can even be performed manually with IDA Pro, just by looking at the code, but using a diffing utility is in most cases the easiest solution.

Posted in Programming, Trivia | 31 Comments

A malware with my name

There’s a piece of malware circulating that contains my name in its version information. I’m, of course, not the author (putting one’s own name in the version info would be brilliant). I’m clarifying this because three people have already contacted me about it since yesterday.

I suspect it was done on purpose and is not the result of randomly generated version info. What the author(s) of this malware don’t realize is that they made me stumble upon an additional technique against malware, one that’ll probably damage their business and force them to work more.

Given my very limited amount of spare time, it’s too soon to discuss this.

Posted in News | 15 Comments

CFF Explorer 7.9 & Secunia

Today I received a Secunia report email about a buffer overflow vulnerability in the CFF Explorer. I was quite amused =). I mean, I usually get emails from users about bugs in the CFF; I had never gotten an email from Secunia before.

However, it’s always good to get bug reports. The bug itself was related to a string overflow in the resource editor. I put string-safe functions into the old kernel of the CFF quite some time ago, but apparently I missed one.

So, since I already had the project open to fix this bug, I also added support for .NET unoptimized metadata streams, which is the most important new feature in this release.

Posted in Update | 6 Comments

IDAQ: The result of 7 months at Hex-Rays

It is no mystery that Hex-Rays is preparing for the IDA 6.0 beta program. In this post I’ll write a bit about my personal, behind-the-scenes experience with the project.

It took me 7 months to port/rewrite the old VCL GUI of IDA Pro. The new GUI, as was already announced months ago on the official blog, is Qt-based.

The main difficulties I faced were mostly not of a technical nature, although it was a complex task, but psychological. It took a lot of patience, and it was very difficult to go to work every morning and see an unfinished product next to the old GUI, reminding me how much there was still to do.

What follows is a rough roadmap of my work; I’ll mention only the milestones and not the hundreds of smaller parts. It has to be noted that, at least as far as the docking is concerned, I wrote most of it before joining Hex-Rays, to accelerate the development of the actual GUI once in the company. While Qt has a docking system, it is not as advanced as the one used by the VCL GUI, which is a commercial control. So, I wrote a docking system myself in order to offer all the advanced features the old GUI had.

January: first contact with the code. It took me a week to grasp the initial concepts needed to start. By the end of the month I could display the disassembly and graph mode of a file. Also, hints, the graph overview and the disassembly arrows were implemented.

February: implemented choosers and forms (which I actually completely changed internally; that’s why I had to improve them again later on to obtain better backwards compatibility).

March: marathon month. Every day I implemented one or more dialogs/views such as: hex view, CPU registers view, enum view, struct view, options, navigation band, colors, etc. More than 30 in total, some very easy, some advanced controls such as the hex view or the CPU registers view.

April: two weeks to finish the docking and smaller things.

May: two weeks to implement the desktop part (the ability to save/restore layouts and options) and smaller things.

June: fixes, help system and improved the forms implementation.

July: Hundreds of fixes for the beta.

While there will still be bugs to fix, I consider the project completed, and I wrote this post to close a chapter for myself.

Posted in Uncategorized | 5 Comments

Rebel.NET & Phoenix Protector Update

Both suffered from a bug where they’d fail if the assembly to reproduce/protect didn’t have a .rsrc section. Since at the time I wrote the code all .NET assemblies had a .rsrc section, I took it for granted and didn’t include specific checks.

Posted in Update | 4 Comments