
link lost warning

The original [webPage, URL] is no longer available. You may be able to find it by downloading a directory listing of [all, part] of my webSite, then searching through that listing for candidate [dir, fil]s that could be the target of the link. I don't yet have a [nice, simple] way of doing this, but here are two approaches.



browser directory listing (iterative)

The simplest non-technical way to capture a directory listing is to step through my webSite with your browser :
  1. point your browser at http://www.BillHowell.ca/ to list the root directory
  2. text-capture the full listing (select-all, copy, and paste it into a text file)
  3. click on one of the sub-directories listed by the browser, for example "Neural nets"
    (which is http://www.BillHowell.ca/Neural nets/)
  4. repeat step 2
  5. continue as desired with a [depth, breadth]-first recursion through the directory trees to capture a listing of [all, part] of my webSite [dir, fil]s.
This is very simple, but it is ugly and consumes your time. You will likely die of boredom long before you get a full listing.
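
For reference, here is a sketch of what one text capture from step 2 might look like. The exact layout depends on your browser and on the server's index-page style, and the entry names below are hypothetical place-holders, not real [dir, fil]s :

Index of /Neural nets/
  ../
  [some sub-directory]/
  [some webPage].html
  [some document].pdf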

Now you will want to find likely candidate [dir, fil]s by searching through your file of captured listings. It might be easiest to search your text file listing of my webSite [dir, fil]s using a bash script. I'd rather not write that now, as it would be useless for me in the future, and you would be far better served if I make sure that better tools, such as [curl, lftp, wget] as listed in the next section, work for you. Trust me, the computer utilities will do vastly superior work than you could ever hope to do for simple stuff like this.
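
That said, even a one-line grep will often do. A sketch, assuming you saved your captured listing as 'Howell webSite dirList.txt' (a hypothetical file name) and that you remember a fragment of the lost link's name, say "neural" :
$ grep -i 'neural' 'Howell webSite dirList.txt'   # -i ignores case; quote file names with spaces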



use of [curl, lftp, wget] ftp tools (don't work for now)

A vastly superior way to proceed is to use the "oldie-but-goldie" tools that have been available since probably even pre-internet days. The following won't work (for me, anyways) without a webSite password, but I'm hopeful that I can change things or find the right tool.
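
For the record, here is the lftp variant I would try first. This is a sketch only : it assumes the host answers ftp and that you have credentials, and USERNAME is a place-holder, not a real account :
$ lftp -u 'USERNAME' -e 'find; exit' 'ftp://www.BillHowell.ca' \
    >"$d_temp"'lftp Howell webSite dirList.txt'   # lftp's find prints a recursive file list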

For example, targeting the online directory of my videos :
$ curl --list-only 'http://www.BillHowell.ca/Bill Howells videos/'
This should generate the NON-RECURSIVE listing : (nyet - doesn't work for now). One caveat : curl's --list-only gives a name-only listing for ftp URLs; over plain http it has no effect, so the best you can hope for is the server's HTML index page, if one is enabled.
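
If the server does publish an HTML index page, a workable http sketch is to fetch that page with curl and strip out the href targets. Assumptions here : index pages are enabled (many servers turn them off), and the %20s are my guess at how the spaces in the directory name must be encoded :
$ curl -s 'http://www.BillHowell.ca/Bill%20Howells%20videos/' \
    | grep -oE 'href="[^"]*"' \
    | sed -e 's/^href="//' -e 's/"$//'   # keep only the link targets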

To recursively list sub-directories : here curl drops out of the race, as it has no recursive mode at all (curl's -r is short for --range, a byte range, not recursion). Of the [curl, lftp, wget] trio, wget is the one built for recursive crawls. (nyet - doesn't work for now)
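
A sketch of the wget approach, assuming the server permits crawling (no password wall, no blocking robots.txt). --spider walks the links without saving pages, -r recurses, -np ("no parent") keeps wget from climbing above the starting directory, and the crawl log on stderr is then filtered down to just the URLs :
$ wget --spider -r -np 'http://www.BillHowell.ca/Bill Howells videos/' 2>&1 \
    | grep -oE 'http://[^ ]+' | sort -u   # unique URLs seen during the crawl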

To recursively list my entire webSite, this is a [long time, large] process, so the output should be redirected to a file. The same warning applies : curl cannot recurse, so the job again falls to wget. Put your own intended output path in place of "$d_temp"'wget Howell webSite dirList.txt'. (nyet - doesn't work for now)
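
The whole-site version of the same sketch, under the same assumptions as above :
$ wget --spider -r -np 'http://www.BillHowell.ca/' 2>&1 \
    | grep -oE 'http://[^ ]+' | sort -u \
    >"$d_temp"'wget Howell webSite dirList.txt'   # redirect the URL list to your own file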