Problem listing repository files using DIR http(s)://...


Nov 9, 2016
Hi all.

I'm writing BTMs to download new versions of specific software automatically by reading the repository content directly, then feeding the latest release to WGET.
If the target is a real FTP back-end (URI = ftp: or ftps:), there's no problem. I can do the following and easily parse the result:
DIR /B*core3sdsv5i64.exe



Works a treat.

But when the URI is HTTP or HTTPS, I get nothing, regardless of whether the URL contains only dirs or dirs + files:

Directory of*

1 01 1601 0.00 0
0 bytes in 1 file and 0 dirs

Total for:*
0 bytes in 1 file and 0 dirs

The same URL entered in my browser gives me a list of almost 1000 lines.

So for now I do TYPE <url>, which downloads the page's HTML, and then comb through that for the latest release.
Pain in the ass!
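For illustration, the "comb through the HTML" step could look something like this. This is only a Python sketch, assuming the index page links each installer with an href and that release names sort correctly as strings; the file mask is borrowed from the DIR example above:

```python
import re

def latest_release(html, pattern=r'href="([^"]*core3sdsv5i64[^"]*\.exe)"'):
    """Collect hrefs matching the installer mask from an HTML index page
    and return the lexically greatest one (works when version numbers
    are zero-padded so string order matches version order)."""
    names = re.findall(pattern, html)
    return max(names) if names else None

# hypothetical snippet of an HTML directory index
page = '''
<a href="core3sdsv5i64-1.02.exe">core3sdsv5i64-1.02.exe</a>
<a href="core3sdsv5i64-1.10.exe">core3sdsv5i64-1.10.exe</a>
'''
print(latest_release(page))  # core3sdsv5i64-1.10.exe
```

The result could then be handed to WGET exactly as with the FTP case.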

I've tried with exotic variations such as "ftp://...:80" and "https://...:21" but found no valid work-around.

Isn't there a way to get the same result with DIR /B http(s)://... as with DIR /B ftp(s)://... ?
I hope so.

@Joe:

Of course I know I can achieve my goal in PowerShell or some other scripting environment, but I like TCC because it's no-nonsense and versatile.
Besides, this is a JPsoft forum. It seems misplaced to suggest using other software as an answer to my question.

Whatever http(s) shows you, it's definitely not a directory. It's just text displayed however the page's author wanted it displayed.

That said, TCC can help you comb through the page. For example,
COPY http://site/webpage
TPIPE /input=webpage /simple=16

That will remove the HTML tags. TPIPE has many more features that will allow you to extract and manipulate data from a file.
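For readers outside TCC, the tag-stripping idea can be sketched in a few lines. This is only a rough Python approximation of what /simple=16 achieves, not TPIPE's actual algorithm:

```python
import re

def strip_tags(html):
    """Crude HTML tag removal: replace anything between < and > with a
    space, then collapse runs of whitespace into single spaces."""
    text = re.sub(r'<[^>]*>', ' ', html)
    return ' '.join(text.split())

print(strip_tags('<p>Hello <b>world</b></p>'))  # Hello world
```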
If I open your link in a browser, I get a 404 error because of the asterisk. If I remove the asterisk, I get the list of files.
Keep in mind that "wget -O filename -- URL" is literally equivalent to "wget -O - -- URL > filename", which in turn defeats wget's most useful feature: server timestamping.
