[TxMt] slow loading of large html/xml documents
Hans-Joerg Bibiko
bibiko at eva.mpg.de
Thu Jul 5 15:28:17 UTC 2007
On 5 Jul 2007, at 16:32, Thomas Aylott (subtleGradient) wrote:
> On Jul 5, 2007, at 6:30 AM, Tobias Jung wrote:
>
>> At 12:03 Uhr -0400 03.07.2007, Thomas Aylott (subtleGradient) wrote:
>>> I regularly have to deal with html documents that have been
>>> stripped of all newlines.
>>
>> Well, maybe this isn't the kind of solution you're looking for,
>> but...
Here is my solution of the same kind ;)
If I have such a huge one-line XML/HTML document, I use "Filter Through
Command":
Command: cat myhugefile.xml | perl -pe 's/<\/(.*?)>/<\/$1>\n/g'
Input: None
Output: Create New Document
By doing so I can handle HTML files several megabytes in size with TM.
One could fine-tune it to insert a \n only after, say, every fourth
closing tag.
Or one could also add s/<br>/<br>\n/g or something like that.
If I want to preserve the original structure I use:
Command: cat myhugefile.xml | perl -pe 's/<\/(.*?)>/<\/$1>§§\n/g'
Input: None
Output: Create New Document
in order to distinguish between an 'original' line break and 'my
temporary' line break.
Thus I can easily delete all §§\n after my editing, again using perl.
####
Maybe you have done this already, but just in case you haven't:
UNCHECK!! -> Check Spelling As You Type
####
I don't know whether it would speed up TM in such a case if it were
possible to switch off the undo buffer/function?
Hans