Question

I'm using the StreamReader class in .NET like this:

using( StreamReader reader = new StreamReader( @"c:\somefile.html", true ) ) {
    string filetext = reader.ReadToEnd();
}

This works fine when the file has a BOM, but I ran into trouble with a file that has no BOM: basically I got gibberish. When I specified Encoding.Unicode it worked fine, e.g.:

using( StreamReader reader = new StreamReader( @"c:\somefile.html", Encoding.Unicode, false ) ) {
    string filetext = reader.ReadToEnd();
}

I need to get the file contents into a string. How do people usually handle this? I know there's no solution that will work 100% of the time, but I'd like to improve my odds. There is obviously software out there that tries to guess (e.g., Notepad, browsers, etc.). Is there a method in the .NET Framework that will guess for me? Does anyone have some code they'd like to share?

More background: this question is pretty much the same as mine, but I'm in .NET land. That question led me to a blog listing various encoding-detection libraries, but none are in .NET.

OTHER TIPS

You should read this article by Raymond Chen. He goes into detail on how programs can guess what an encoding is (and some of the fun that comes from guessing).

http://blogs.msdn.com/oldnewthing/archive/2004/03/24/95235.aspx

I had good luck with Ude, a C# port of Mozilla's Universal Charset Detector.
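For reference, a minimal sketch of using it (this assumes the Ude NuGet package; the CharsetDetector API shown is the one from that project's documentation, and the file path is just an example):

```csharp
using System;
using System.IO;
using Ude; // Ude.CSharp package

class Detect
{
    static void Main()
    {
        byte[] buffer = File.ReadAllBytes(@"c:\somefile.html");

        // Feed the raw bytes to the detector, then signal end of data.
        CharsetDetector detector = new CharsetDetector();
        detector.Feed(buffer, 0, buffer.Length);
        detector.DataEnd();

        if (detector.Charset != null)
        {
            // Charset is a name like "UTF-8"; Confidence ranges 0.0-1.0.
            Console.WriteLine("{0} (confidence {1})",
                detector.Charset, detector.Confidence);
        }
        else
        {
            Console.WriteLine("Detection failed");
        }
    }
}
```

You can then pass the detected charset name to Encoding.GetEncoding and re-read the file with it.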

UTF-8 is designed in such a way that text encoded in an arbitrary 8-bit encoding like Latin-1 is very unlikely to decode as valid UTF-8, so a strict UTF-8 decode doubles as a cheap validity check.

So the minimum approach is this (pseudocode, I don't talk .NET):

try:
    u = some_text.decode("UTF-8")
except UnicodeDecodeError:
    u = some_text.decode("most-likely-encoding")

For the most-likely encoding, one usually uses Latin-1, cp1252, or whatever fits the data. More sophisticated approaches might try to find language-specific character pairings, but I'm not aware of a library that does that.
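The same try-strict-UTF-8-then-fall-back idea can be sketched in C# using UTF8Encoding's throw-on-invalid-bytes mode (a minimal sketch; the choice of ISO-8859-1 as the fallback is an assumption you'd tune to your data):

```csharp
using System;
using System.IO;
using System.Text;

class EncodingFallback
{
    // Decode as strict UTF-8; if the bytes are not valid UTF-8,
    // fall back to a likely single-byte encoding (here ISO-8859-1).
    public static string ReadWithFallback(string path)
    {
        byte[] bytes = File.ReadAllBytes(path);

        // throwOnInvalidBytes: true makes the decoder throw instead of
        // silently substituting U+FFFD for malformed sequences.
        Encoding strictUtf8 = new UTF8Encoding(
            encoderShouldEmitUTF8Identifier: false,
            throwOnInvalidBytes: true);
        try
        {
            return strictUtf8.GetString(bytes);
        }
        catch (DecoderFallbackException)
        {
            // Not valid UTF-8 -- use the "most likely" legacy encoding.
            // (windows-1252 would need CodePagesEncodingProvider on .NET Core.)
            return Encoding.GetEncoding("ISO-8859-1").GetString(bytes);
        }
    }
}
```

Since any byte sequence is valid ISO-8859-1, the fallback branch always succeeds; the strict UTF-8 pass is what does the actual discrimination.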

I used this to do something similar a while back:

http://www.conceptdevelopment.net/Localization/NCharDet/

Use Win32's IsTextUnicode.
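That API can be reached from C# via P/Invoke; a minimal sketch (Windows-only, and note that Raymond Chen's article linked below discusses how fallible this heuristic is):

```csharp
using System;
using System.Runtime.InteropServices;

class UnicodeSniffer
{
    // IsTextUnicode lives in Advapi32.dll. Passing IntPtr.Zero for
    // lpiResult tells the API to apply all of its heuristic tests.
    [DllImport("Advapi32.dll")]
    static extern bool IsTextUnicode(byte[] buf, int len, IntPtr lpiResult);

    // Returns true if the bytes statistically look like UTF-16 text.
    public static bool LooksLikeUtf16(byte[] bytes)
    {
        return IsTextUnicode(bytes, bytes.Length, IntPtr.Zero);
    }
}
```

If you need to know which specific tests passed, pass a ref int of flags instead of IntPtr.Zero and inspect it on return.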

In the general sense, it is a difficult problem. See: http://blogs.msdn.com/oldnewthing/archive/2007/04/17/2158334.aspx.

A hacky technique might be to take an MD5 hash of the raw bytes, then decode the text with various candidate encodings, re-encode it, and MD5 each result. If one matches, you guess it's that encoding.

That's obviously too slow for something that handles a lot of files, but for something like a text editor I could see it working.
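A sketch of that round-trip check (the class and method names are illustrative, not from any library):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class RoundTripGuess
{
    // Decode with a candidate encoding and re-encode; if the MD5 of the
    // result matches the MD5 of the original bytes, the encoding
    // round-trips the data losslessly and is a plausible guess.
    public static bool RoundTrips(byte[] original, Encoding candidate)
    {
        byte[] reencoded = candidate.GetBytes(candidate.GetString(original));
        using (MD5 md5 = MD5.Create())
        {
            return md5.ComputeHash(original)
                      .SequenceEqual(md5.ComputeHash(reencoded));
        }
    }
}
```

Two caveats: a direct byte-by-byte comparison would do just as well unless you hash the original once and test many candidates against the stored hash; and single-byte encodings like ISO-8859-1 round-trip any input, so this can only rule candidates out, not positively confirm one.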

Other than that, it'll be a matter of getting your hands dirty porting the Java libraries from this post that came from the Delphi SO question, or using the IE MLang feature.

See my (recent) answer to this (as far as I can tell, equivalent) question: How can I detect the encoding/codepage of a text file

It does NOT attempt to guess across a range of possible "national" encodings like MLang and NCharDet do, but rather assumes you know what kind of non-unicode files you're likely to encounter. As far as I can tell from your question, it should address your problem pretty reliably (without relying on the "black box" of MLang).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow