From ono@java.pl Wed Feb 27 12:47:32 2008
From: Adam Strzelecki
To: textmate@lists.macromates.com
Subject: Re: [TxMt] TM tokenizer is taking 100% CPU for a long while using 100-200KB text files with a single line
Date: Wed, 27 Feb 2008 13:47:23 +0100
Message-ID: 
In-Reply-To: <1F3297FA-2D2E-4965-8933-924934EDD4CD@eva.mpg.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============5771939409398099180=="

--===============5771939409398099180==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

> If I do this, test.html pops up in a second.
> My settings are: Soft Wrap is ON AND! Check Spelling as You Type is OFF.
> But then you get into trouble while editing.

This is exactly what I'm trying to emphasize.

When I use:

$ perl -e '{print "" x 20000}' > test.html

the file loads immediately, but TM takes 100% CPU for a few minutes before the syntax highlighting appears.

When I use:

$ perl -e '{print "\n" x 20000}' > test.html

(note the \n), the file loads immediately together with the syntax highlighting, and there is no 100% CPU.

So I think there's definitely something wrong with the syntax highlighter (tokenizer). I remember the compiler & parser construction lessons from my university days: lexer & parser performance shouldn't depend on line breaks.

Regards,
-- 
Adam Strzelecki |: nanoant.com :|

--===============5771939409398099180==--
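[Editorial note] The claim that a classroom-style lexer's speed shouldn't depend on line breaks can be checked with a minimal sketch. This is not TextMate's tokenizer; it is a hypothetical single-regex lexer, and since the HTML tag in the first perl command was stripped by the list archive, `<b></b>` is used here as an assumed stand-in for the repeated markup:

```python
import re
import time

# Hypothetical minimal lexer: one regex alternation applied to the whole
# buffer, the way a simple hand-built tokenizer would scan it.
TOKEN = re.compile(r"<[^>]*>|[^<\n]+|\n")

def tokenize(text):
    """Return every tag, text run, and newline as a separate token."""
    return TOKEN.findall(text)

# "<b></b>" is an assumed placeholder for the tag stripped from the email.
one_line  = "<b></b>" * 20000    # all 20000 tag pairs on a single line
many_line = "<b></b>\n" * 20000  # one tag pair per line

for name, buf in (("one line", one_line), ("many lines", many_line)):
    start = time.perf_counter()
    count = len(tokenize(buf))
    print(f"{name}: {count} tokens in {time.perf_counter() - start:.3f}s")
```

With a linear-time scan like this, both inputs tokenize in roughly the same time, which is the behavior the email argues a tokenizer should have regardless of where the newlines fall.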