2006-04-25 04:24 pm (UTC)
You can do GM_xmlhttpRequest, a wrapper around the regular XMLHttpRequest call, from inside Greasemonkey, so you could build something that would talk to LJ via either the browser or client interfaces.
I have no idea whether the client protocol is particularly useful for building your own friends list variants, though.
Yeah, the more I think about it, the more I think I don't want greasemonkey.
I am actually currently playing with Habari Xenu, a Firefox extension RSS reader. It already logs in to LJ appropriately, and I like most of how it displays posts. I'd need to add functionality to use the LJ API to read LJ accounts -- right now it uses RSS, and one of the things I want to work around is that many LJ RSS feeds currently display the title only, no content.
I once had plans for something similar: a tool to let you easily read someone's entire journal. The basic design was for the tool to log in to LJ, then screenscrape the archive and talkread pages and reformat them, giving the appearance of a continuous journal (rather than the usual lastn layout). It would probably be possible to do something similar for the friends page, though you'd need some programming experience to do so.
I don't think the current client protocol has anything in it for reading friends lists, and the protocol for archiving journals only works for the journal's owner.
I was imagining it would work like this:
1. Request friends list for logged-in user
2. For each user on list, request "all recent posts" (I believe I saw this in the API)
3. Indicate which users have new posts
4. When the user chooses a friend, display those posts
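If the client protocol does turn out to expose something checkfriends-like, steps 1 and 3 might look roughly like this. The LJ flat interface answers with alternating name/value lines, which is easy to parse; note that the field names and the sample response below are my assumptions for illustration, not taken from real protocol output:

```python
def parse_flat_response(body):
    """Parse a LiveJournal flat-interface-style response:
    alternating name/value lines become a dict."""
    lines = body.split("\n")
    # Pair even-indexed names with odd-indexed values.
    return dict(zip(lines[::2], lines[1::2]))

# Hypothetical sample response to a checkfriends-style request;
# the real field names may differ.
sample = "success\nOK\nnew\n1\nlastupdate\n2006-04-25 16:24:00"
resp = parse_flat_response(sample)
if resp.get("success") == "OK" and resp.get("new") == "1":
    print("friends page has new posts")
```

The same parser would serve for whatever mode returns the friends list itself, since the flat interface uses one response shape throughout.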
I don't know XUL but I do know XML and Java -- "how hard can it be?" :)
To get the recent posts, you could use the per-user RSS feeds (I think they live somewhere around user.livejournal.com/data/rss). Add support for 304 (HTTP Not Modified), and away you go!
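A minimal sketch of the conditional-GET part in Python -- the feed URL shape is just the guess above, and the helper names here are mine; the idea is to remember the Last-Modified value between polls and send it back as If-Modified-Since:

```python
import urllib.error
import urllib.request

def make_feed_request(url, last_modified=None):
    """Build a feed request, sending If-Modified-Since when we
    kept a Last-Modified value from a previous fetch."""
    req = urllib.request.Request(url)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    return req

def fetch_if_changed(url, last_modified=None):
    """Return (body, new_last_modified), or (None, last_modified)
    if the server answers 304 Not Modified."""
    try:
        with urllib.request.urlopen(make_feed_request(url, last_modified)) as resp:
            return resp.read(), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as e:
        if e.code == 304:
            return None, last_modified
        raise
```

A `None` body means nothing changed since the last poll, so the reader can skip re-rendering that friend entirely.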
Unfortunately one of the things I most want to work around (which I should have mentioned in my original post) is that many LJ users lately have configured their RSS feeds to contain titles only, no content. I specifically want to get the data direct from LJ so that I can get all the data! (And hopefully have a chance to handle cut tags -- not sure yet how that will work.)
Hmm. You should be able to get the itemids out of the RSS feed (the permalink). Use those to grab the actual entry, and grab it with the lynx style (which IIRC nukes just about all the custom layouts and leaves you with a very basic layout, plus the actual entry).
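The itemid-extraction step might look like this sketch -- both the permalink shape and the `format=light` query parameter are my assumptions about how the "lynx style" is requested, so check them against a real feed:

```python
import re

# A LiveJournal permalink from an RSS <link> element looks roughly
# like http://username.livejournal.com/12345.html; the trailing
# number is the item id. (Exact URL shape is an assumption here.)
PERMALINK = re.compile(r"(http://\w+\.livejournal\.com/(\d+)\.html)")

def light_url(permalink):
    """Turn a permalink into a stripped-down 'lynx style' request
    by appending format=light, which (IIRC) bypasses the custom
    layouts and leaves just the entry."""
    m = PERMALINK.match(permalink)
    if not m:
        return None
    return m.group(1) + "?format=light"
```

Fetching the resulting URL should give a page simple enough to scrape the full entry body out of, cut tags and all.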
Clever! Yes! That's very helpful, thanks.
Isn't this something you could do in S2?
What's S2? (Googling got me lots of irrelevant hits.)
I mean, a livejournal S2 style. It's really quite a full-blown programming environment, believe it or not.
Hmm. Can it allow me to do the following?
* Not show me posts that I've already seen (default friendslist just shows me a bunch of recent things even if I've already read them)
* Let me mark a post that I've already seen as "save" and continue to show it to me until I unmark it
* Update by itself, without my having to hit reload
* Have two frames, one with a set of folders which I can open to show a list of friends that I've sorted into that folder, one with just the content of the post I'm currently viewing
? If so, then that does seem to be the way to go. Maybe I should go read the docs on that...
After browsing the S2 docs, it seems that it should almost be able to do what I want -- but I don't see whether it can persist information between sessions; i.e., if I want to save a list of posts that I've already read so as not to be offered them again, where do I put that information? I thought of using cookies (even though that seems inelegant), but as far as I can tell, S2 isn't able to set cookies.
I think the reloading thing is just doable as an HTML refresh directive (e.g. &lt;meta http-equiv="refresh" content="300"&gt; to reload every five minutes).
2006-04-29 07:34 pm (UTC)
some ways to do it, with varying degrees of slickness vs. simplicity
1. You could just suck the data from your friends' RSS feeds using a simple script in Ruby or Python. Feed it an OPML document exported from Bloglines, trimmed to contain only the feeds you want to process this way. When you get to the item level, chase the link and scrape the portion of the data you want from the page. The output could be something really simple, like an .html file, which you just load into your browser. If it always gets dumped to the same file, you could bookmark the file: URL in your browser -- or simply double-click it on your desktop each time and not bother with a bookmark.
3. Write an XSLT script to do it. You would have to run it from command line though. XSLT in the browser is purposely fettered to prevent it from running off to websites other than the one it was loaded from. LiveJournal is using XHTML and hopefully it is well-formed XML. RSS and OPML are XML as well. So an XSLT script should be pretty well-suited to this task.
4. Use the portal feature of Cocoon to do essentially the same thing as solution #2 but using a lot of declarative HTML rather than procedural scripting language code.
5. You could write a Java program to do it. Java has good support for XML - as well as XSLT - and it is capable of displaying HTML using certain text components in the Swing GUI framework that comes with it.
6. You could write a Java applet to do it, though you would probably have to sign it, since it would not be loading from the friends' sites from which it would be reading content. All the friends' URLs would point to the same domain, but I think they are on different hosts. If not, perhaps it would not have to be signed; you could always try and see.
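Approach #1 above could be sketched in Python rather than Ruby. Everything here is illustrative -- the helper name is mine and the miniature OPML document is made up; a real Bloglines export will be larger but have the same outline/xmlUrl shape:

```python
import xml.etree.ElementTree as ET

def feed_urls_from_opml(opml_text):
    """Pull the xmlUrl of every outline element out of an OPML export."""
    root = ET.fromstring(opml_text)
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]

# Made-up miniature OPML document in the shape Bloglines exports.
sample_opml = """<opml version="1.1">
  <head><title>subscriptions</title></head>
  <body>
    <outline title="friends">
      <outline title="alice" xmlUrl="http://alice.livejournal.com/data/rss"/>
      <outline title="bob" xmlUrl="http://bob.livejournal.com/data/rss"/>
    </outline>
  </body>
</opml>"""

for url in feed_urls_from_opml(sample_opml):
    print(url)  # fetch each feed here, then chase item links and scrape
```

From there the item-level scraping is whatever HTML-parsing you prefer, with the output appended to the single .html file the browser reads.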
2006-04-29 11:17 pm (UTC)
some programming tips/resources for reading/reprocessing blogs from feeds
Take a look at this list of blog feed-reading tips. It might help give you some ideas if you want to write a quick-and-dirty Ruby script to do it for you.
Seems like they were wrestling with the same problem over there.
2006-05-01 01:39 pm (UTC)
Re: some programming tips/resources for reading/reprocessing blogs from feeds
"They" are me. Thanks for all your suggestions!