Quoting Charilaos Skiadas <skiadas@hanover.edu>:
This was my first approach to this too. The problem is that this is relatively slow, because it has to load R each time, and that does take up time. The interpreter sounds like a better idea, but it doesn't work out of the box. (Well, I guess we don't even know if we can make it work at all yet ;) ). There is an approach that is relatively doable but has technical problems. We can start an R process in the background and communicate with it via named pipes, which you can think of as files on the hard drive that TM would write to and R would read from. This would be reasonably fast. The problem we run into is that this would mean a shared R environment for all your R work. So imagine you are working on three different R projects, in different windows. They might define conflicting variables and mess up each other's computations if they were all sent to the same R process. So this adds a considerable number of details that need to be worked out.
Well, I tried communicating with R via named pipes. It works quite well, but one thing is tricky.
I created a fifo r_pipe
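For anyone following along, that's just:

mkfifo r_pipe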
Then I started R via
R --slave <r_pipe >result.txt
After that I wrote in another bash shell:
echo "2+3" >r_pipe
OK. In result.txt you can see [1] 5. But my R job was cancelled, and I couldn't find a way to avoid this.
Then I tried the following hack. I call 'R --slave <r_pipe >result.txt' in a loop. Before R is cancelled, I save the workspace and reload it when the new R starts. This works, but if you have a large workspace it takes time.
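If I understand the hack right, it amounts to something like this sketch (it relies on the new R restoring .RData at startup, which I believe is the default behaviour):

while true; do
  R --slave <r_pipe >>result.txt
done

and every command sent through the pipe also saves the workspace before the echo closes the pipe and R quits:

echo '2+3; save.image()' >r_pipe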
The point is: how do you avoid R being cancelled??
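I suspect R isn't really being "cancelled": when echo closes the write end of the FIFO, R sees EOF on stdin and quits, as a --slave R does at end of input. If that's right, one fix is to keep some writer attached to the pipe for the whole session so EOF never arrives, e.g. this sketch:

sleep 1000000 >r_pipe &           # hold the write end open in the background
R --slave <r_pipe >result.txt

Another trick (untested here) is to open the FIFO read-write, so R itself holds a writer and never sees EOF:

R --slave 0<>r_pipe >result.txt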
My approach would be to start each new R session bound to its own named pipe. By doing so you could write in TM the line
3+4 >r_pipe17
for the 17th session I started.
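As a sketch, a per-session launcher could look like this (the script name and file names are made up):

#!/bin/sh
# start_r_session.sh N -- start the Nth background R session
N="$1"
mkfifo "r_pipe$N"
sleep 1000000 >"r_pipe$N" &       # hold the pipe open so R never sees EOF
R --slave <"r_pipe$N" >"result$N.txt" &

TM would then send code to session 17 by writing to r_pipe17 and reading result17.txt.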
Any ideas?
@hadley: Of course 'R --slave ...' simplifies the script.
-Hans