Quote:
DonXoop wrote:
I think that you should be able to snapshot a site to a static copy similar to this.
Me too, which makes this all the more frustrating.
Quote:
Aren't the denied errors coming from hitting links that anonymous isn't allowed? ...
Good question. No, the only things anonymous is allowed to see on this site are the login block and a static block (which says you have to register and log in to see anything else). The wget process pulls the main "inside" page beautifully, but when it tries to follow the links off of that page, it gets the errors.
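Something along these lines is what I'm aiming for (just a sketch: example.com, cookies.txt and the snapshot directory are placeholders, and it assumes the logged-in session is sitting in a cookie file so every page in the recursive fetch gets it, not only the first one):

    wget --mirror --convert-links --page-requisites --no-parent \
         --load-cookies cookies.txt --keep-session-cookies \
         --directory-prefix=site-snapshot \
         http://example.com/

If the deeper links still come back as access denied with that, then the session isn't being carried past the first request and it's back to the permissions question.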
Quote:
If you want a complete copy you might need to config the rights for anon to follow the links that you want to archive. You can turn them off when done.
Hrmm ... I had thought of that, but I am definitely going to want to do this with other projects in the future, so I should find a solution now while the pressure is low. I've always done this with hand-rolled HTML sites: just copy the directory structure to a CD! :)
Quote:
I don't think you want to have the session id in the URLs either. But I guess you are using that to simulate a logged-in user?
I am trying to simulate a logged-in user; I don't particularly care whether the URLs are pretty or not, though I guess it would be nicer if they were. That's probably more of an XOOPS site admin setting than an HTTP-fetching mirrorer problem, eh?
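If I can get the session into a cookie instead of the URL, the plan (again only a sketch: the uname/pass/op=login field names are what a stock XOOPS user.php login form appears to post, so worth checking against the actual form, and archiveuser/secret are placeholders) would be to log in once and save the cookie, then feed that cookie file to the mirror run above:

    wget --save-cookies cookies.txt --keep-session-cookies \
         --post-data 'uname=archiveuser&pass=secret&op=login' \
         -O /dev/null \
         http://example.com/user.php

The --keep-session-cookies flag matters here because the login cookie has no expiry time, and wget normally drops session cookies when it writes the cookie file.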
Thanks for the reply...