Annotated source code for Second Life flickr screen

June 19th, 2006  |  Published in misc

I’ve had quite a few requests for the source code of the Flickr screen for Second Life that I wrote about a few weeks ago. Here’s the code, annotated with links to the Second Life coding wiki and a few notes.

I’m only publishing the LSL code, because the server-side code isn’t very interesting and does pretty much exactly what it says in the previous post. If you want to run your own Flickr screen, the URLs in the code below should work just as well for your objects as they do for mine. If you have problems, let me know. I reserve the right to switch off the code if the traffic gets too high, but I’ll post here if I have to do that.

UPDATE: Sorry, but I no longer have the server capacity to run the backend for this service. I’m told that new features in the Second Life Viewer mean that you can achieve the same thing now with Shared Media.

The code uses SL’s streaming media feature to load the jpeg into a texture. This comes with a number of restrictions: ‘You are allowed one movie (or “media” resource) per land parcel. The movie will be played by replacing a texture on an object with the movie. Users will only see the movie when they are standing on your land parcel. Otherwise they will see the static texture. Script functions only work for objects owned by the land owner or deeded to the group that owns the land. (Remember to set asset permissions on your script and object as well as sharing it with the group!)’. I’m hoping for much better media support than this in future Second Life versions.

// A place to remember the ID for the latest http request we made,
// so the callback doesn't process out-of-order responses
key http_id;
// Test pattern - used as the default video texture when one is missing in parcel media
key VIDEO_DEFAULT = "6e0f05ad-1809-4edc-df29-fae3d2a6c9b8";

// Set the parcel media texture to the jpeg provided by url
seturl(string url)
{
    key video_texture = llList2Key(llParcelMediaQuery([PARCEL_MEDIA_COMMAND_TEXTURE]), 0);
    if (video_texture == NULL_KEY)
        video_texture = VIDEO_DEFAULT;
    llParcelMediaCommandList([PARCEL_MEDIA_COMMAND_TEXTURE, video_texture,
                              PARCEL_MEDIA_COMMAND_URL, url,
                              PARCEL_MEDIA_COMMAND_PLAY]);
}

default
{
    state_entry()
    {
        // set a default jpeg (server URL omitted from this listing)
        seturl("");
        // start listening for nearby speech on channel 1 (matching the '/1 sometag' hint below)
        llListen(1, "", NULL_KEY, "");
        // start sensing nearby agents once a minute (the 20m range here is a guess)
        llSensorRepeat("", NULL_KEY, AGENT, 20.0, PI, 60.0);
    }

    sensor(integer num_detected)
    {
        integer i;
        for (i = 0; i < num_detected; i++)
        {
            // ping the server so we know who's around, but don't record the
            // request ID, because we don't need http_response to care about it
            llHTTPRequest("" + llDetectedName(i), [], "");
        }
    }

    listen(integer channel, string name, key id, string message)
    {
        llSay(0, "Setting the tag for " + name + " to " + message);
        string url = "" + name + "&tag=" + message;
        http_id = llHTTPRequest(url, [], "");
    }

    touch_start(integer total_number)
    {
        // ask the server for a jpeg appropriate to the agent who touched us
        string url = "" + llDetectedName(0);
        llSetText("Finding a picture for " + llDetectedName(0), <0,0,1>, 1);
        http_id = llHTTPRequest(url, [], "");
    }

    http_response(key request_id, integer status, list metadata, string body)
    {
        // make sure we're processing a response we care about
        if (request_id == http_id)
        {
            // only on request success
            if (status == 200)
            {
                // data is coming back as pipe-delimited
                list data = llParseString2List(body, ["|"], []);
                // url is the first field
                string url = llList2String(data, 0);
                if (url == "UNKNOWN")
                {
                    llWhisper(0, "I don't know what kind of picture to show you.\nType '/1 sometag' to tell me what tag to search for on flickr and I'll remember it.");
                }
                else
                {
                    // name is the second field
                    string name = llList2String(data, 1);
                    llSetText("Showing a picture for " + name, <0,0,1>, 0.5);
                    seturl(url);
                }
            }
        }
    }
}
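The backend itself isn’t shown, but the protocol it speaks is visible in the script: a plain-text body of either the pipe-delimited pair the http_response handler splits apart, or the literal string UNKNOWN when no tag has been remembered for an avatar. Here’s a minimal sketch of that response logic in Python; the function and variable names are my own invention, not the real service’s:

```python
# Sketch of the backend response format the LSL script expects.
# All names here are hypothetical; only the wire format comes from the script.
tags = {}  # avatar name -> remembered flickr tag


def set_tag(avatar, tag):
    """Remember the flickr tag an avatar set with '/1 sometag'."""
    tags[avatar] = tag


def respond(avatar, find_photo):
    """Build the plain-text body the LSL http_response handler parses:
    'url|name' on success, or 'UNKNOWN' if no tag is stored for this avatar.
    find_photo is a callable taking a tag and returning (photo_url, owner_name)."""
    tag = tags.get(avatar)
    if tag is None:
        return "UNKNOWN"
    url, owner = find_photo(tag)
    return url + "|" + owner
```

For example, `respond("Visitor", lookup)` would return something like `"http://example.com/p.jpg|photographer"` once `set_tag("Visitor", "sunset")` has run, and `"UNKNOWN"` before it.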
