Top Answer:
Unless a hardcore windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).
1. File system - You should try the same operations (including the dir) on the same filesystem. I came across this which benchmarks a few filesystems for various parameters.
2. Caching. I once tried to run a compilation on Linux on a ramdisk and found that it was slower than running it on disk, thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.
3. Bad dependency specifications on windows. Maybe the chromium dependency specifications for Windows are not as correct as for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows.
There are parts of Windows that are implemented just the same as Linux, and parts that are faster. Some parts are slower, notably the file system. But there's more to Windows than just the file system.
So, I'd say that's why it's not constructive.
More details here: http://code.google.com/p/msysgit/issues/detail?id=320
I (and others) have put a lot of effort into making the Linux Chrome build fast. Some examples are multiple new implementations of the build system ( http://neugierig.org/software/chromium/notes/2011/02/ninja.h... ), experimentation with the gold linker (e.g. measuring and adjusting the still off-by-default thread flags https://groups.google.com/a/chromium.org/group/chromium-dev/... ) as well as digging into bugs in it, and other underdocumented things like 'thin' ar archives.
But it's also true that people who are more of Windows wizards than I am a Linux apprentice have worked on Chrome's Windows build. If you asked me the original question, I'd say the underlying problem is that on Windows all you have is what Microsoft gives you and you can't typically do better than that. For example, migrating the Chrome build off of Visual Studio would be a large undertaking, large enough that it's rarely considered. (Another way of phrasing this is it's the IDE problem: you get all of the IDE or you get nothing.)
When addressing the poor Windows performance, people first bought SSDs, something that never even occurred to me ("your system has enough RAM that the kernel cache of the file system should be in memory anyway!"). But for whatever reason on the Linux side some Googlers saw fit to rewrite the Linux linker to make it twice as fast (this effort predated Chrome), and all Linux developers now get to benefit from that. Perhaps the difference is that when people write awesome tools for Windows or Mac they try to sell them rather than give them away.
Including new versions of Visual Studio, for that matter. I know that Chrome (and Firefox) use older versions of the Visual Studio suite (for technical reasons I don't quite understand, though I know people on the Chrome side have talked with Microsoft about the problems we've had with newer versions), and perhaps newer versions are better in some of these metrics.
But with all of that said, as best as I can tell Windows really is just really slow for file system operations, which especially kills file-system-heavy operations like recursive directory listings and git, even when you turn off all the AV crap. I don't know why; every time I look deeply into Windows I get more afraid ( http://neugierig.org/software/chromium/notes/2011/08/windows... ).
NTFS Performance Hacks - http://oreilly.com/pub/a/windows/2005/02/08/NTFS_Hacks.html
There are a plethora of disk benchmarking tools - I doubt that they consistently show 40x differences.
Hooves -> horses, and all that.
It's an online SQL tool for analyzing the stackoverflow data dump (last update was Sept 2011; they're quarterly http://blog.stackoverflow.com/category/cc-wiki-dump/). It's very cool, but curiously hard to find. There's a "[data-explorer]" tag at meta.stackoverflow: http://meta.stackoverflow.com/questions/tagged/data-explorer
The PostHistory table probably records why posts were closed.
dir /s > c:\list.txt
is piping it into a file. Where does the speed of the terminal affect that (in any significant fashion)? I know what you're getting at - tar --verbose can slow things down for me by sometimes a factor of 2 (for huge tarballs), but I don't think it's an issue in this situation.

Unlike on Linux, it's quite difficult to perform piecemeal inclusion of system header files because of the years of accumulated dependencies that exist. If you want to use the OS APIs for opening files, or creating/using critical sections, or managing pipes, you will either find yourself forward declaring everything under the moon or including Windows.h, which alone, even with symbols like VC_EXTRALEAN and WIN32_LEAN_AND_MEAN, will noticeably impact your build time.
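To make that concrete, here is a tiny, hypothetical translation unit of my own (not from the thread) using the trimming macros mentioned above; even this still pulls in a surprising amount of the Win32 surface, which you can verify with cl /P or cl /showIncludes.

    // Hypothetical minimal-include pattern: define the trimming macros before
    // pulling in Windows.h and keep the include confined to one place.
    #define WIN32_LEAN_AND_MEAN
    #define VC_EXTRALEAN
    #define NOMINMAX
    #include <windows.h>

    int main() {
        // Even with the macros above, this TU still drags in a lot of the
        // Win32 surface; compare against a forward-declaration-only header.
        HANDLE h = CreateFileW(L"test.txt", GENERIC_READ, FILE_SHARE_READ,
                               nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
        return 0;
    }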
DirectX header files are similarly massive too. Even header files that seem relatively benign (the Dinkumware STL <iterator> file that MS uses, for example) end up bringing in a ton of code. Try this -- create a file that contains only:
#include <vector>
Preprocess it with GCC (g++ -E foo.cpp -o foo.i) and MSVC (cl -P foo.cpp) and compare the results -- the MSVC 2010 version is seven times the size of the GCC 4.6 (Ubuntu 11.10) version!

This advice from the article:

"The default cluster size on NTFS volumes is 4K, which is fine if your files are typically small and generally remain the same size. But if your files are generally much larger or tend to grow over time as applications modify them, try increasing the cluster size on your drives to 16K or even 32K to compensate. That will reduce the amount of space you are wasting on your drives and will allow files to open slightly faster."

is wrong. When you increase the cluster size you will definitely not "reduce the amount of space you are wasting". A 100B file will still occupy a whole 16KB cluster (so you will waste 15.9KB on it instead of 3.9KB with 4KB clusters).
Also, I would be very careful about taking advice like that from an article which is 6 years old (from before the introduction of Win7 or even XP SP3!).
http://www.joelonsoftware.com/articles/fog0000000319.html
In Joel's opinion it is an algorithm problem. He thinks that there is an O(n^2) algorithm in there somewhere causing trouble. And since one does not notice the O(n^2) unless there are hundreds of files in a directory it has not been fixed.
I believe that is probably the problem with Windows in general. Perhaps there are a lot of bad algorithms hidden in the enormous and incredibly complex Windows code base and they are not getting fixed because Microsoft has not devoted resources to fixing them.
Linux on the other hand benefits from the "many eyes" phenomenon of open source and when anyone smart enough notices slowness in Linux they can simply look in the code and find and remove any obviously slow algorithms. I am not sure all open source software benefits from this but if any open source software does, it must certainly be Linux as it is one of the most widely used and discussed pieces of OS software.
Now this is total guesswork on my part but it seems the most logical conclusion. And by the way, I am dual booting Windows and Linux and keep noticing all kinds of weird slowness in Windows. Windows keeps writing to disk all the time even though my 6 GB of RAM should be sufficient, while in Linux I barely hear the sound of the hard drive.
NTFS also fragments very badly when free space is fragmented. If you don't liberally use SetFilePointer / SetEndOfFile, it's very common for large files created incrementally to end up with thousands, or tens of thousands, of fragments. Lookup (rather than listing) on massive directories can be fairly good though - btrees are used behind the scenes - presuming that the backing storage is not fragmented, again not a trivial assumption without continuously running a semi-decent defragmenter, like Diskeeper.
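As a rough sketch of that SetFilePointer/SetEndOfFile trick (my own illustration; the function name is made up, and it assumes you know the final size up front):

    #include <windows.h>

    // Returns a handle positioned at offset 0 with the file already extended to
    // finalSize (so NTFS can try to reserve one contiguous run), or
    // INVALID_HANDLE_VALUE on failure.
    HANDLE CreatePreallocatedFile(const wchar_t* path, LONGLONG finalSize) {
        HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) return h;

        LARGE_INTEGER size; size.QuadPart = finalSize;
        LARGE_INTEGER zero = {};
        if (!SetFilePointerEx(h, size, nullptr, FILE_BEGIN) ||  // seek to the final size
            !SetEndOfFile(h) ||                                 // extend the file now
            !SetFilePointerEx(h, zero, nullptr, FILE_BEGIN)) {  // rewind for writing
            CloseHandle(h);
            return INVALID_HANDLE_VALUE;
        }
        return h;
    }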
According to this document (http://i-web.i.u-tokyo.ac.jp/edu/training/ss/lecture/new-doc...) it would appear that directory entries have one extra level of indirection and share space with the page cache and hence can be pathologically evicted if you read in a large number of files; compiling/reading lots of files for example.
On Linux, however, the directory entry cache is a separate entity and is less likely to be evicted under readahead memory pressure. It should also be noted that Linus has spent a largish amount of effort to make sure that the directory entry cache is fast. Linux's inode cache has similar resistance to page cache memory pressure. Obviously, if you have real memory pressure from user pages then things will slow down considerably.
I suspect that if Windows implemented a similar system, with a file metadata cache that is separate from the rest of the page cache, it would speed up similarly.
Edit: I should note, this probably wouldn't affect linking as much as it would affect git performance; git is heavily reliant on a speedy and reliable directory entry cache.
[1] http://msdn.microsoft.com/en-us/library/ms940846(v=winembedd...
[2] http://oreilly.com/pub/a/windows/2005/02/08/NTFS_Hacks.html (#8)
MSBuild does suck in that there is little implicit parallelism, but you can hack around it. I have a feeling that the Windows build slowness probably comes from that lack of parallelism in msbuild.
As for directory listings it may help to turn off atime, and if it's a laptop enable write caching to main memory. I'm not quite sure why Windows file system calls are so slow, I do know that NTFS supports a lot of neat features that are lacking on ext file systems, like auditing.
As for the bug mentioned, it's perfectly simple to load the wrong version of libc on Linux, or hook kernel calls the wrong way. People hook calls on Windows because the kernel is not modifiable and has a strict ABI; that's a disadvantage if you want to modify the behavior of Win32/kernel functions, but a huge advantage if you want to write, say, graphics drivers and have them work after a system update.
Microsoft doesn't recommend hooking Win32 calls for the exact reasons outlined in the bug: if you do it wrong, you screw stuff up. On the other hand, Rubyists seem to love the idea that you can change what a function does at any time; I think they call it 'dynamic programming'. I can make lots of things crash on Linux by patching ld.so.conf so that a malware version of libc is loaded. I'd hardly blame the design of Windows when malware has been installed.
Every OS/Kernel involves design trade offs, not every trade off will be productive given a specific use case.
Also, Joel's complaint was about the Windows Explorer GUI (specifically, opening a large recycle bin takes hours). Cygwin `ls` is using a completely different code path. Your experiment does suggest that Joel's problem is in the GUI code, though, and not the NTFS filesystem code.
Even better, you can do cross compiling with MinGW. So if your toolchain doesn't perform well on Windows, just use GCC as a cross compiler and build your stuff on a Linux or BSD machine. Then use Windows for testing the executable. (On smaller projects, you usually don't even need Windows for that, since Wine does the job as well.)
(Full disclosure: I'm the maintainer of a Free Software project that makes cross compiling via MinGW very handy: http://mingw-cross-env.nongnu.org/)
Access times: According to the comments on that blog entry (and according to all search hits that I could find, see for example [1]) atime is already disabled by default on Windows 7, at least for new/clean installs.
1: http://superuser.com/questions/200126/are-there-any-negative...
I also remember that I was able to create a file copy utility in assembly as a homework assignment that was a couple of times faster than the Windows/DOS copy command.
The only two reasons I can think of that explain this are: 1 - no one cares about Windows filesystem performance. 2 - someone decided that it shouldn't be too fast.
I don't know if something similar exists under windows (I suppose it doesn't).
$ cat /Library/LaunchDaemons/com.nullvision.noatime.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
    "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.nullvision.noatime</string>
    <key>ProgramArguments</key>
    <array>
        <string>mount</string>
        <string>-vuwo</string>
        <string>noatime</string>
        <string>/</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
I don't know if there's anything like relatime.

This has a double effect: first, there are fewer developers who can attempt to fix problems with a driver, and second, the incentive seems smaller - make general I/O 0.5% faster and you're a hero to the public; fix a critical problem in an unpopular device driver and maybe one person notices.
reg add "HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management" -v DisablePagingExecutive -d 0x0 -t REG_DWORD -f
Disable it with: reg add "HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management" -v DisablePagingExecutive -d 0x1 -t REG_DWORD -f
More discussion here: http://serverfault.com/questions/12150/does-the-disablepagin...
The easiest way (if you have enough memory) is to just disable the paging file.
Moreover, the optimal buffer size is different for small and large files; maybe Windows is not optimized for large files like a DVD image.
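That buffer-size point is easy to test with a sketch like this (my own illustration, not from the thread; CopyWithBuffer is a made-up name): time the same large copy with, say, a 64KB versus a 4MB buffer.

    #include <windows.h>
    #include <vector>

    // Copy src to dst with a caller-chosen buffer size, so the effect of
    // buffer size on throughput can be measured directly.
    bool CopyWithBuffer(const wchar_t* src, const wchar_t* dst, DWORD bufSize) {
        HANDLE in = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, nullptr,
                                OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
        if (in == INVALID_HANDLE_VALUE) return false;
        HANDLE out = CreateFileW(dst, GENERIC_WRITE, 0, nullptr,
                                 CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (out == INVALID_HANDLE_VALUE) { CloseHandle(in); return false; }

        std::vector<char> buf(bufSize);
        bool ok = true;
        for (;;) {
            DWORD got = 0;
            if (!ReadFile(in, buf.data(), bufSize, &got, nullptr)) { ok = false; break; }
            if (got == 0) break;  // end of file
            DWORD put = 0;
            if (!WriteFile(out, buf.data(), got, &put, nullptr) || put != got) { ok = false; break; }
        }
        CloseHandle(in);
        CloseHandle(out);
        return ok;
    }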
Our software builds everyday on FreeBSD, Linux and Windows on servers that are identical.
The Windows build takes 14 minutes. The FreeBSD and Linux builds take 10 minutes (they run at almost identical speed).
Checkout is more than twice as slow on Windows (we use git).
Debug build time is comparable: 5 minutes on Windows, 4 minutes 35 seconds on Linux.
Release build time is almost 7 minutes on Windows and half that on Linux.
VS compiles more slowly than gcc but overall it's a better compiler. It handles static variables better and is not super demanding about typenames like gcc is. Also, gcc is extremely demanding in terms of memory. gcc is a 64-bit executable; Visual Studio is still a 32-bit executable. We hope Microsoft will fix that in Visual Studio 2011.
It's easier to parallelize gmake than Visual Studio, which also explains the better Linux build time. Visual Studio has got some weird "double level" multithreading which ends up being less efficient than just running the make steps in parallel as you go through your makefile.
However, our tests run at comparable speed on Linux and Windows, and Windows builds the archive ten times faster than Linux.
Well, if that was true, then you could just buy the better tool and use it, right? I suspect they don't exist because
a) On Linux, you have the existing linker to build your better one on. On Windows, you'd have to write your own from scratch making it less appealing for anyone (only people who'd buy it were those running huge projects like Chrome)
b) What you said about the file system itself just being plain slow.
PS: (Long time follower of evan_tech - nice to see you popup around here :) )
I pointed it out mainly because terminals can have a significant impact on performance, because dumping millions of lines a second isn't their intended purpose,[1] whilst the shell can be reasonably expected to do that.
Having it entirely as a shell built-in is possibly actually better than the equivalent '/bin/ls > somefile', since it doesn't need to context switch back and forth as the stdout buffer fills up and the shell has to write it.
[1] I recall there being a Gentoo-related thread about why "Gentoo-TV" -- having the output of gcc scroll past as your background with a transparent xterm -- was actually slowing down package builds significantly.
http://blogs.technet.com/b/filecab/archive/2006/11/07/disabl...
As to the actual complexity curve (which, knowing what I do about NTFS, I'm fairly sure is O(n log n)), I don't really care about it; since it hasn't shown up in a serious way at n=100000, it's unlikely to realistically affect anyone badly. Even if 1 million files (in a single directory!) took 18.5 seconds, it wouldn't be pathological. Other limits like disk bandwidth and FS cache size seem like they'd kick in sooner.
On the other hand, the paper also shows how ridiculous it is to talk about the wasted space - look at figure 14: files smaller than 16KB occupy ~1% of all space. Even if we waste space 4:1 it's still a ridiculously small amount of space.
[1] http://www.usenix.org/events/fast11/tech/full_papers/Meyer.p...
As for my homework "copy" command, I know that it is not a full replacement for the Windows file copy command, but if a copy operation takes >10min, all those checks and additional tasks shouldn't make an I/O-bound operation take several times longer than what some student implemented as homework.
2) Linux forks significantly faster than anything else I know. For something like Chromium the compiler is forked a bazillion times, and so are the linker and nmake and so on and so forth.
3) Linux, the kernel, is heavily optimized for building stuff as that's what the kernel developers do day in and day out - there are threads on LKML that I can't be bothered to dig out right now, but a lot of effort goes into optimizing for the kernel build workload - maybe that helps.
4) Linker - the stock GNU linker is slower and did not do the more costly optimizations until now, so it might be faster simply because it does less than the MS linker, which does incremental linking, WPO and what not. Gold is even faster, and I may be wrong, but I don't think it does everything the MS linker does either.
5) Layers - Don't know if Cygwin tools are involved, but they add their own slowness.
What we weren't able to compare between the compilers was link-time optimization and profile-guided optimization since Microsoft crippled VC express by removing these optimizations.
So when someone makes claims that 'VC++ generates significantly better code than GCC' I want to see something backing that up. Had I made a blanket statement that 'GCC generates significantly better code than VC++', someone would call on me to back that up as well, and rightly so.
To benchmark the maximum shell script performance (in terms of calls to other tools per second), try this micro-benchmark:
while true; do date; done | uniq -c
Unix shells under Windows (with cygwin, etc.) run about 25 times slower than on OS X.

If one were optimizing Windows performance, none of the specific areas used as examples would receive much attention given user demographics. What percentage of Windows users use the command line, much less compile C programs, never mind using "cmd" shells to do so?
Windows command line gurus will be using Powershell these days, not the legacy encumbered "cmd" - elsewise they are not gurus.
Disclaimer: I work at Microsoft, but this is my hazy recollection rather than some kind of informed statement.
Also, if you were doing anything heavily floating-point, MSC 2010 would be a bad choice because it doesn't vectorize. Even internally at Microsoft, we were using ICC for building some math stuff. The next release of MSC fixes this.
The Windows desktop GUI system is more stable than anything else out there (meaning that it's not going to change drastically AND that it's a solid piece of software that just works) and it's as flexible as I need it to be, so that's why I stick with Windows. With virtual machines, WinSCP, Cygwin and other similar utilities, I have all the access to *nix that I need.
The reverse, incidentally, was usually okay. If you could build it with MSBuild, it usually worked in Visual Studio unless you used a lot of custom tasks to move files around.
I personally believe the fact that Visual Studio is all but required to build on Windows is one of the most common reasons you don't see much OSS that is Windows-friendly, aside from projects that are Java based.
It would have been interesting to compare the quality of the respective compilers' PGO/LTO optimizations (particularly PGO, given that for GCC and ICC code is sometimes up to 20% faster with that optimization), but not interesting enough for us to purchase a Visual Studio licence.
And yes, we use floating point math in most of our code, and if MSC doesn't vectorize then that would certainly explain worse performance. However, this further contradicts the blanket statement 'VC++ generates significantly better code than GCC' which I was responding to.
while photoshop isn't on linux, there are plenty of replacements for that unless he's doing print work, which I don't think is the case, as photoshop isn't the beginning and end for print. (actually, TBH, photoshop is pretty shit for pixel work.)
Also maya is available for linux, autodesk just doesn't offer a free trial like they do with windows/mac os. (Including the 2012 edition.)
No offence intended to the 3dsmax crew, as it has its merits, but a sufficiently competent Maya user won't find much use for 3dsmax.
Apple's switching from gcc to clang/llvm, and doing a lot of work on the latter, which is open source.
More like Linux benefits from "many budgets and priorities". If someone at Microsoft spots an obviously slow algorithm, they may not be allowed to fix it instead of working on whatever they're supposed to be working on, which probably doesn't include "fixing shipped, mature code that pretty much works in most cases."
On the Linux side, someone can decide it's really freakin' important to them to fix a slow bit, and there's little risk of it being a career-limiting move.
If you make a claim such as 'GCC produces significantly worse code than alternate compiler A' then it's completely reasonable to ask for something to support it. Tone wise perhaps the post could have been improved, but the principle stands.
It takes all of 2 minutes to try this experiment yourself (plus ~8 minutes for the download).
1. Download chromium http://chromium-browser-source.commondatastorage.googleapis....
2. Unzip to a directory
3. Create this batch file in the src directory, I called mine "test.bat"
echo start: %time% >> timing.txt
dir /s > list.txt
echo end: %time% >> timing.txt
4. Run test.bat from a command prompt, twice. Paste your output in this thread. Here is mine:
start: 12:00:41.30
end: 12:00:41.94
start: 12:00:50.66
end: 12:00:51.31
First pass: 640ms; Second pass: 650ms

I can't replicate the OP's claim of a 40000ms directory seek, even though I have WORSE hardware. Would be interested in other people's results. Like I said, it only takes 2 minutes.
Oh, wait, you mean for Linux? Good luck with that. The sad thing is that either providing drivers for Linux or providing detailed enough specs that drivers can be written by the OSS community offers little or no payback for the effort. As a Linux user, I feel lucky that any hardware manufacturers even acknowledge people might be running something other than Microsoft.
When you're in bed with Microsoft, you probably have even more reasons to blow off the OSS community. The sweaty chair-chucker wouldn't like it, and you don't want to get him angry. You wouldn't like him when he's angry.
(For comparison, I have an older checkout on this Linux laptop. It's around 145ms to list, and it's 123k files. The original post mentions 350k files, which is a significantly different number. It makes me wonder if he's got something else busted, like "git gc" turned off, creating lots of git objects in the same directory producing some O(n^2) directory listing problem. But it could just as well be something like build outputs from multiple configurations.)
There's also the issue that it seems to slow down exponentially with directory size. In short, worst FS for building large software.
As for the OP's build time complaint about XCode - don't do that. Use the make build instead. Not only does XCode take forever to determine if it needs to build anything at all, it caches dependency computations. So if you do modifications outside of XCode, too, welcome to a world of pain :) (I know evmar knows, but for the OP: use GYP_GENERATORS=make gclient runhooks to generate makefiles for Chromium on OSX)
For the most part, Linux just doesn't do that. Obviously there are performance bugs, but they get discussed and squashed pretty quickly, and the process is transparent. On the occasions I'm really having trouble, I can generally just google to find a discussion on the kernel list about the problem. On windows, we're all just stuck in this opaque cloud of odd behavior.
You don't necessarily have to use VS to develop on windows. Mingw works quite well for a lot of cross-platform things and it is gcc and works with gnu make.
My experience with porting OSS between Windows and Linux (both ways) has been that very few developers take the time out to encapsulate OS specific routines in a way that allows easy(ier) porting. You end up having to #ifdef a bunch of stuff in order to avoid a full rewrite.
This is not a claim that porting is trivial. You do run into subtle and not-so-subtle issues anyway. But careful design can help a lot - then again, this requires that you start out with portability in mind.
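A minimal illustration of that encapsulation idea (the header and function names here are mine, purely hypothetical): keep the #ifdefs in one small portability layer rather than scattered through the code base.

    // portability.h -- one small layer holding the OS-specific bits.
    #ifdef _WIN32
      #define WIN32_LEAN_AND_MEAN
      #include <windows.h>
    #else
      #include <unistd.h>
    #endif

    inline bool path_exists(const char* path) {
    #ifdef _WIN32
        return GetFileAttributesA(path) != INVALID_FILE_ATTRIBUTES;
    #else
        return access(path, F_OK) == 0;
    #endif
    }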
So you assume that Windows 8's Metro mode won't really catch on? Also, comparing OSX to Windows over the past 10 years, it's Windows that has changed more drastically, so both future and past evidence point to the contrary...
If you look at the indexing options in the Vista control panel and click the "Advanced" button, you'll find a dialog box with a "File Types" tab. This horrible dialog may show you (it did for me) that some file types (i.e., filename extensions) are deliberately excluded from indexing. For some reason. You know, because you may not want to find certain things when you look for them. I guess.
You'll also find the world's worst interface for specifying what kinds of file should be indexed by content. But never mind.
If searching by filename and/or path is all you're after, check out Everything:
If you're not using Windows as an Administrator (and you shouldn't be), Everything won't seem very polished. But it is terrifyingly fast, and it's baffling that Microsoft's built-in search is this bad if something like Everything is possible.
When I have problems with Linux, I tend to have to fall back to strace or an equivalent, and I find it harder to figure out what's going on. On Solaris, if I can find the right incantations to use with dtrace, I can see where the problems are, but it's easy to get information overload.
My point is, how opaque your perspective is depends on your familiarity with the system. I have less trouble diagnosing holdups on Windows than I do on other systems. That's because I've been doing it for a long time.
It's hard to keep a thick skin about having your voice diminished when even your informative, unopinionated stuff gets shut down.
Every file on Windows keeps ~1024 bytes of info in the file cache. The more files there are, the more cache is used.
A recent finding that sped up our systems from 15 sec to 3 sec on a 300,000+ file timestamp check was to move from _stat to GetFileAttributesEx.
One would not think of doing such things, after all the C api is nice, open, access, bread, _stat are almost all there, but some of these functions do a lot of CPU intensive work (and one is not aware, until just a little bit of disassembly is done).
For example, _stat does lots of divisions, a dozen kernel calls, strcmp, strchr, and a few other things. If you have Visual Studio (or even Express) the CRT source code is included for one to see.
access(), for example, is relatively "fast", in the sense that there is just a mutex (I think; I'm on OSX right now) and then a call to GetFileAttributes.
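As a minimal sketch of that _stat-to-GetFileAttributesEx swap (my own illustration, assuming all you need is the last-write time for a staleness check, not the full _stat struct):

    #include <windows.h>

    // Fetch just the last-write time, skipping the conversions _stat performs.
    bool GetLastWriteTime(const wchar_t* path, FILETIME* out) {
        WIN32_FILE_ATTRIBUTE_DATA data;
        if (!GetFileAttributesExW(path, GetFileExInfoStandard, &data))
            return false;
        *out = data.ftLastWriteTime;  // raw FILETIME; compare with CompareFileTime()
        return true;
    }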
And back to RamMap - it's very useful in the sense that it shows you which files are in the cache, and what portion of them, also very useful that it can flush the cache, so one can run several tests, and optimize for hot-cache and cold-cache situations.
A few months ago, I came up with a scheme borrowing an idea from Mozilla, where they would just pre-read (in a second thread) certain DLLs that would eventually end up being loaded.
I did the same for one of our tools. The tool is single threaded: it reads, operates, then writes, and it usually reads 10x more than it writes. So while the processing was able to be multi-threaded through OpenMP, reading was not; instead I had a list of what to read ahead in a second thread, so that when the first thread wanted to read something, it got it from the cache. If the pre-fetcher fell behind, it skipped ahead. There was no need to even keep the contents in memory - just read it, and that's it.
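A rough sketch of that read-ahead idea (my own reconstruction, not the tool's actual code; PrefetchFiles is a made-up name): a second thread reads each file once and throws the bytes away, purely to warm the OS cache for the single-threaded consumer.

    #include <windows.h>
    #include <string>
    #include <thread>
    #include <vector>

    // Read each file once and discard the bytes, purely to populate the OS cache.
    void PrefetchFiles(const std::vector<std::wstring>& paths) {
        std::vector<char> buf(1 << 20);  // 1MB scratch buffer
        for (const auto& p : paths) {
            HANDLE h = CreateFileW(p.c_str(), GENERIC_READ, FILE_SHARE_READ, nullptr,
                                   OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
            if (h == INVALID_HANDLE_VALUE) continue;
            DWORD got = 0;
            while (ReadFile(h, buf.data(), (DWORD)buf.size(), &got, nullptr) && got > 0) {
                // discard; the point is warming the cache
            }
            CloseHandle(h);
        }
    }

    // Usage: std::thread prefetcher([&]{ PrefetchFiles(fileList); }); ... prefetcher.join();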
For some other tool, where reading patterns cannot be found easily (deep-tree hierarchy), I made something else instead - saving to a binary file what was read before for the given set of command-line arguments (filtering some). Later that was reused. It cut certain operations down by 25-50%.
One lesson I've learned, though, is to let the I/O do its job from one thread... unless everyone has some proper OS, with some proper drivers, with some proper HW, with....
Tim Bray's Wide Finder and Wide Finder 2 competitions also had good information. The guy who won it has some good analysis of multi-threaded I/O on his site (can't find the site now)... But the analysis was basically that it's too erratic - sometimes you get speedups, but sometimes you get slowdowns (especially with writes).
Here is an example:
score id post
1416 1711 What is the single most influential book every programmer should read?
1409 9033 Hidden Features of C#?
1181 101268 Hidden features of Python
979 1995113 Strangest language feature
736 500607 What are the lesser known but cool data structures?
708 6163683 Cycles in family tree software
671 315911 Git for beginners: The definitive practical guide
653 662956 Most useful free .NET libraries?
597 891643 Twitter image encoding challenge
583 83073 Why not use tables for layout in HTML?
579 2349378 New programming jargon you coined?
549 621884 Database development mistakes made by application developers
537 1218390 What is your most productive shortcut with Vim?
532 309300 What makes PHP a good language?
505 1133581 Is 23,148,855,308,184,500 a magic number, or sheer chance?
488 114342 What are Code Smells? What is the best way to correct them?
481 432922 Significant new inventions in computing since 1980
479 3550556 I've found my software as cracked download on Internet, what to do?
479 380819 Common programming mistakes for .NET developers to avoid?
473 182630 jQuery Tips and Tricks
The query (also at http://data.stackexchange.com/stackoverflow/s/2305/top-close... ) - have a play:

SELECT TOP 20
    p.score, p.id, p.id AS [Post Link]
FROM Posthistory h
INNER JOIN PosthistoryTypes t ON h.posthistorytypeid = t.id
INNER JOIN Posts p ON h.postid = p.id
WHERE t.name = 'Post Closed'
GROUP BY p.score, p.id
ORDER BY p.score DESC
PGO, on the other hand, seems very unlikely to fail due to memory constraints; at least I've never come across that happening. The resulting code for the profiling stage will of course be bigger since it contains profiling code, but I doubt the compilation stage requires a lot more memory even though it examines the generated profiling data when optimizing the final binary.
It seems weird that PGO would not work with Chromium given that it's used in Firefox (which is not exactly a small project) to give a very noticeable speed boost (remember the 'Firefox runs faster with the Windows Firefox binary under Wine than the native Linux binary' debacle? That was back when Linux Firefox builds didn't use PGO while the Windows builds did.)
I also used Digital Mars for the same, but DMC sometimes fails with big builds.
I use Visual Studio now because I'm using DirectX and I just want something that works out of the box.
"But if your files are generally much larger or tend to grow over time..."
No numbers are given in what you quoted, but when I think about "much larger" I'm thinking of file sizes in the megabyte to gigabyte range, such as a dedicated data drive for a media server or a database server. A 4KB median file size doesn't fit my mental model of "much larger".

I don't assume, know or care to know anything about Windows Metro. It's not replacing the desktop system that I use.