On 2007/03/29, at 22:27, Rob McBroom wrote:
Yes, this is exactly the process I follow. The issue here is that the "test" part can't happen until the files on the web server have been updated. Ergo, Subversion is not the right way to update those files because that would put "checkin" before "test".
To be more explicit, I keep the working copy on the web server rather than on my local machine and use some kind of remote filesystem to access it. I don't generally have local copies of any of these files. Perhaps that will clear up some confusion.
Why don't you use rsync or scp to synchronize with your web server? Check out, modify, sync, test, modify, sync, test, check in. Then, on the production web server, you just run an update.
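For example, something along these lines (the hostname and paths are just placeholders, not anything from Rob's setup):

    # push the local working copy to the dev web server, leaving out the .svn metadata
    rsync -avz --exclude='.svn' ~/wc/mysite/ user@devserver:/var/www/mysite/

The --exclude keeps the .svn directories from ending up under the document root.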
On Mar 29, 2007, at 4:38 PM, Charilaos Skiadas wrote:
2) I would think that it is particularly important that you have a subversion system exactly when you do web stuff, where you need to check things on the actual server often. With a subversion system, you can update the server after you've committed the changes, and then if there are problems simply revert to the previous, known-to-be-working, version until you've worked out what went wrong.
OK, but whether I commit my new crappy changes or not, the old "known good" version was previously committed and I can therefore revert to it, so I'm not sure what you're getting at. When I think changes are good, I commit them. If subsequent changes were a horrible mistake, I can revert. If I committed every single change, as you seem to be suggesting, then I could revert to previous good versions and previous bad versions, but why would I ever want to do the latter?
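(For what it's worth, going back is cheap either way; the revision number here is made up:)

    # on the server's working copy, drop back to the last good revision temporarily
    svn update -r 120
    # or undo the bad changes in the repository itself with a reverse merge
    svn merge -r HEAD:120 .
    svn commit -m "Roll back to r120"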
My personal opinion: just commit things that actually *work*. Never, and I mean never, commit unfinished work or things that fail to compile or anything like that. If you are using Perl, first of all check that it doesn't have syntax errors :-D
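A quick way to do that from the command line (the script name is just an example):

    # compile the script without running it; -w turns on warnings
    perl -cw upload.pl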