Sunday, March 11, 2012

Using a photo frame as a second monitor [Updated]

Some computing scenarios would benefit from a second monitor, even a small one. A good example is an HTPC, where info like which music, radio station, or TV channel is playing is often shown on little 2-line LCD devices.

Photo frames offer so much more space - and at much higher resolution than even the biggest LCD devices - to display information. If only one could write to them.

The Samsung SPF-87H Digital Photo Frame is such a device. It can be connected to a computer and switched into a so-called Mini-Monitor mode, allowing the computer to write to the frame. Samsung offers a program called 'Frame Manager' for Windows, but nothing for Linux. Some attempts at Linux functionality have already been made, like here, here, and the discussion here.

I am now offering a Python script which can lock the frame into Mini-Monitor mode and send pictures to it. The script is very simple and has basically no error checking, but it is heavily commented. It provides only the basic functionality, e.g. pictures must be pre-sized to what the frame can handle (800x480 pixels, width x height). To use the script, copy the content of the post pyframe_basic into a file pyframe_basic and make it executable (chmod a+x pyframe_basic).

An advanced version - not shown yet - will use the Python Imaging Library (PIL) to process pictures of any size and type to fit the frame's requirements, and could prepare pictures with textual information.
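Until that advanced version is posted, the core of the computation can be sketched in a few lines of plain Python: find the largest size that fits into the frame's 800x480 while preserving the picture's aspect ratio (fit_to_frame is a hypothetical helper name, not part of pyframe_basic):

```python
def fit_to_frame(width, height, frame_w=800, frame_h=480):
    """Largest target size within frame_w x frame_h that keeps the aspect ratio."""
    scale = min(frame_w / float(width), frame_h / float(height))
    return int(round(width * scale)), int(round(height * scale))

print(fit_to_frame(1600, 1200))   # 4:3 camera picture -> (640, 480)
print(fit_to_frame(1920, 1080))   # 16:9 video frame   -> (800, 450)
```

The remaining black bars would then be filled by pasting the scaled picture onto an 800x480 background, which PIL does in two calls (resize and paste).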

The photo frame unfortunately does not allow auto-connection. Go through these steps for a manual connection:
  • Connect frame to computer with USB cable
  • Switch on the frame
  • A dialogue pops up on the frame, offering Mass Storage, Mini Monitor, and Photo Frame. Choose Mini Monitor and press Select
  • Your welcome picture (see program code) will be shown

UPDATE 1: transfer speed evaluated
UPDATE 2: code for switching from Mass Storage mode to Mini Monitor mode added
UPDATE 3: a program to send screenshots to the photo frame at video speeds, completely from within Python
UPDATE 4: a program which sends screenshots upon receiving a trigger signal
UPDATE 5: a video recorded from the photo frame, showing a video playing on it

A video showing video on the Samsung photoframe

Using the Python programs from this site, I made a demonstration video, recorded with a digital camera from a Samsung SPF-87H Digital Photo Frame. The quality of the video shown here on the blog is awful in color and resolution, while on the photo frame itself both are excellent. But at least this clip shows that the video plays smoothly through all scenes.


The setup used a virtual frame buffer, so it can also be used on a headless client. In a terminal, give these commands:

Xvfb :99 -screen 0 800x480x16 &
DISPLAY=:99 ./videoframe &
DISPLAY=:99 mplayer -fs /path/to/bbbunny_720p_h264.mov
This creates a virtual frame buffer X server as display :99 with the same screen resolution as the photo frame (800x480; change it to match your frame if needed, here and also in the script), starts the Python videoframe script (see below) on it, and uses mplayer to play a movie in full-screen mode. This was then recorded with a digital camera from the photo frame, and uploaded to this post.

The videoframe script records some frame and transfer rates. Here is an excerpt from the final scenes (each line is an average over 50 frames, i.e. 2-3 seconds):
Frames per second: 18.68, Megabytes per second: 0.84
Frames per second: 17.93, Megabytes per second: 0.88
Frames per second: 17.80, Megabytes per second: 0.87
Frames per second: 17.78, Megabytes per second: 0.87
Frames per second: 17.97, Megabytes per second: 0.88
Frames per second: 18.02, Megabytes per second: 0.89
Frames per second: 17.89, Megabytes per second: 0.88
Frames per second: 17.96, Megabytes per second: 0.88
Frames per second: 15.99, Megabytes per second: 0.96
Frames per second: 17.12, Megabytes per second: 0.89
Frames per second: 16.23, Megabytes per second: 0.96
Frames per second: 16.66, Megabytes per second: 0.94
Frames per second: 17.87, Megabytes per second: 0.88
Frames per second: 17.79, Megabytes per second: 0.90
Frames per second: 16.01, Megabytes per second: 0.98
Frames per second: 19.45, Megabytes per second: 0.80
Frames per second: 22.04, Megabytes per second: 0.69
Frames per second: 22.18, Megabytes per second: 0.68
Frames per second: 21.53, Megabytes per second: 0.71
Frames per second: 18.49, Megabytes per second: 0.88
Frames per second: 18.18, Megabytes per second: 0.88
Frames per second: 17.78, Megabytes per second: 0.90
Frames per second: 18.45, Megabytes per second: 0.83
Frames per second: 19.07, Megabytes per second: 0.83
Frames per second: 17.95, Megabytes per second: 0.88
Frames per second: 18.36, Megabytes per second: 0.85
Frames per second: 20.55, Megabytes per second: 0.74
Frames per second: 21.25, Megabytes per second: 0.70
Frames per second: 20.64, Megabytes per second: 0.74
Depending on the complexity of the picture to be jpeg-coded, the observed frame rate varies from 11 to 27 fps. In this setup the CPU is a 6-year-old Intel Core2 T7200 2.0GHz, running at ~50% load (its cpu-mark is 1150; for reference, today's Intel Core i5-2500 has a cpu-mark of 6750). As noticed earlier, the bottleneck appears to be the frame itself. The speed of movements within the movie scenes does NOT play a role for the transfer rate, as always a single screenshot is taken and processed. However, fine structures (grass, hair, fur, ...), which make for big jpeg files, slow the frame rate down.
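As a quick sanity check (my own arithmetic, not an original measurement): dividing the transfer rate by the frame rate gives the average packet size per frame, which comes out close to three of the 16384-byte chunks the transfer code pads to:

```python
# figures taken from one line of the log above
mbyte_per_s = 0.88e6     # "Megabytes per second: 0.88"
fps         = 17.93      # "Frames per second: 17.93"

bytes_per_frame = mbyte_per_s / fps
chunks = bytes_per_frame / 16384     # packets are zero-padded to 16384-byte chunks

print(round(bytes_per_frame))        # ~49080 bytes per (padded) jpeg
print(round(chunks, 2))              # ~3.0 chunks per frame
```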

The python code for videoframe is shown below the line. See code in other posts below for more detailed comments on parts of the script.

Update:
the command:
pmap.save(buffer, 'jpeg')
is the same as:
pmap.save(buffer, 'jpeg', quality = -1)
which sets the quality to its default setting of 75. Quality ranges from 0 (= very poor) to 100 (= very good). The save command itself is not faster at lower settings, but the resulting picture is smaller, and thus the transfer over the USB bus is faster, allowing higher frame rates! A quality setting of 60 is usually good enough, certainly for video.

see reference in source code:
http://cep.xor.aps.anl.gov/software/qt4-x11-4.2.2-browser/d0/d0e/qjpeghandler_8cpp-source.html#l00897
00959         int quality = sourceQuality >= 0 ? qMin(sourceQuality,100) : 75;
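In Python terms, that line from the Qt source corresponds to the following clamping rule (my sketch; the function name is made up for illustration):

```python
def effective_jpeg_quality(source_quality):
    # negative values select the default of 75; everything else is capped at 100
    return min(source_quality, 100) if source_quality >= 0 else 75

print(effective_jpeg_quality(-1))    # -> 75  (the default used by pmap.save)
print(effective_jpeg_quality(60))    # -> 60
print(effective_jpeg_quality(120))   # -> 100
```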
/Update
_________________________________________________________________________________
#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: videoframe
#
# This videoframe program plays videos on the 'Samsung SPF-87H Digital Photo Frame'
# by taking rapid snapshots from a video playing on a screen and transfers them as jpeg
# pictures to the photo frame
#
# It is an application of the sshot2frame program found on the same
# website as this program
# Read that post to understand details not commented here
# Copyright (C) ullix

import sys
import struct
import usb.core

# additional imports are required
from PyQt4 import QtGui, QtCore
import time

device = "SPF87H Mini Monitor"
dev = usb.core.find(idVendor=0x04e8, idProduct=0x2034)

if dev is None:
    print "Could not find", device, " - exiting\n"
    frame = False
else:
    frame = True
    print "Found", device
    dev.ctrl_transfer(0xc0, 4 )  

app  = QtGui.QApplication(sys.argv)

fd = open("shot.log","a", 0)

# Enter into a loop to repeatedly take screenshots and send them to the frame
start  = time.time()
frames = 0
mbyte = 0

while True:
    # take a screenshot and store into a pixmap
    # the screen was set to 800x480, so it already matches the photoframe
    # dimensions, and no further processing is necessary
    pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())
   
    # create a buffer object and store the pixmap in it as if it were a jpeg file   
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)
    pmap.save(buffer, 'jpeg')
   
    # now get the just saved "file" data into a string, which we will send to the frame
    pic = buffer.data().__str__()
   
    if not frame:
        print "no photoframe found; exiting"
        sys.exit()
    else:
        rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
        pad = 16384 - (len(rawdata) % 16384)
        tdata = rawdata + pad * b'\x00'
        tdata = tdata + b'\x00'
        endpoint = 0x02
        bytes_written = dev.write(endpoint, tdata )
        mbyte += bytes_written

    frames += 1  

    # write out info every 50 frames
    if frames % 50 == 0:
        runtime = time.time() -start
        fd.write("Frames per second: {0:0.2f}, Megabytes per second: {1:0.2f}\n".format( frames / runtime, mbyte/runtime /1000000.))
        start  = time.time()
        frames = 0
        mbyte = 0
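For reference, the wrap-and-pad step that videoframe (and the other scripts on this site) performs before the USB write can be isolated into a small helper. This is only a restatement of the code above; frame_packet is a name I use here, and the byte values are copied from the script:

```python
import struct

def frame_packet(pic):
    # header: 4 magic bytes, little-endian payload length, 4 fixed bytes
    rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
    # zero-pad to a 16384-byte boundary, then append one extra byte
    pad = 16384 - (len(rawdata) % 16384)
    return rawdata + pad * b'\x00' + b'\x00'

packet = frame_packet(b'\xff\xd8' + b'\x00' * 1000)   # dummy stand-in for jpeg data
print(len(packet) % 16384)                            # -> 1 (chunk boundary plus one byte)
```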
     

Triggered Screenshots

When using a photoframe as a display for a (headless) PC, one might want to update the display at regular intervals, e.g. once per minute to update a clock, but also at other events, like pressing a key on a keyboard or remote control.

This can be achieved by making the screenshot program listen to UNIX signals. These signals must not be mistaken for the signals GUIs emit on events like clicking a button or checking a checkbox. Probably the best known of these UNIX signals is SIGINT, which is sent to a program when CTRL-C is pressed, and usually ends the program.

For user-defined purposes the signals SIGUSR1 and SIGUSR2 (numerical codes 10 and 12, resp.) have been reserved. In the shell these signals can be sent with
kill -SIGUSR1 pid-of-program-to-receive-signal
The tsshot2frame program below listens for this signal; when it receives it, it takes a screenshot and sends it to the frame.
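The mechanism can be tried in isolation with nothing but the standard library; here the process sends SIGUSR1 to itself, whereas in the real setup the signal arrives from outside. (One caveat for modern systems: since Python 3.5, PEP 475 makes time.sleep() resume automatically after a signal handler, so there the wake-up would need e.g. a threading.Event instead of an interrupted sleep.)

```python
import os
import signal

received = []

def sigusr1_handler(signum, stack):
    # in tsshot2frame this runs whenever `kill -SIGUSR1 <pid>` is issued;
    # under Python 2 it also interrupts a pending time.sleep()
    received.append(signum)

signal.signal(signal.SIGUSR1, sigusr1_handler)

os.kill(os.getpid(), signal.SIGUSR1)     # same effect as the shell command above

print(received == [signal.SIGUSR1])      # -> True (SIGUSR1 is 10 on Linux)
```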

The triggershot program below is just a demo showing how a program can create and send such a signal. In this case, the program does it when the key 't' is pressed. Obviously, other events can be used instead, like key presses on a remote control, alarm signals from sensors, etc.
______________________________________________________________________________
#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: tsshot2frame
# based on sshot2frame, but allows to be triggered by a SIGUSR1 signal
#
# This triggered-screenshot-to-frame program takes a screenshot from your desktop
# and sends it to the 'Samsung SPF-87H Digital Photo Frame'
#
# The screenshots are taken at regular intervals, but can also be triggered randomly
# by a SIGUSR1 signal, to which this program is listening.
#
# It is an extension of the sshot2frame program found here:
#    http://pyframe.blogspot.com
# Read other posts to understand details not commented here
# Copyright (C) ullix

import sys
import struct
import usb.core
import time
import signal
from PyQt4 import QtGui, QtCore


def takeshot():
    print "tsshot2frame: taking a shot"

    # take a screenshot and store into a pixmap
    #pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())
    # if you want a screenshot from only a subset of your desktop, you can define it like this
    pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId(), x=0, y= 600, width=1200, height=720)

    # next code line is needed only when screenshot does not yet have the proper dimensions for the frame
    # note that distortion will result when aspect ratios of desktop and frame are different!
    # if not needed then inactivate to save cpu cycles
    pmap = pmap.scaled(800,480)

    # create a buffer object and store the pixmap in it as if it were a jpeg file
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)
    pmap.save(buffer, 'jpeg')
    buffer.close()

    # now get the just saved "file" data into a string, which we will send to the frame
    pic = buffer.data().__str__()

    # wrap pic into write format and write to frame
    rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
    pad = 16384 - (len(rawdata) % 16384)
    tdata = rawdata + pad * b'\x00'
    tdata = tdata + b'\x00'
    endpoint = 0x02
    bytes_written = dev.write(endpoint, tdata )


def sigusr1_handler(signum, stack):
    """
    Dummy handler for SIGUSR1 signal.
    """
    pass
    #print "tsshot2frame: sigusr1_handler received signal no:", signum

    # Receiving a signal will interrupt the time.sleep() in the main while loop,
    # which will result in a shot being taken immediately. Therefore a separate
    # takeshot() is not needed here; it would result in two successive shots
    # being taken
    #takeshot()


#----- main starts here ------------------------------

device = "SPF87H Mini Monitor"
dev = usb.core.find(idVendor=0x04e8, idProduct=0x2034)

if dev is None:
    print "tsshot2frame: Could not find device", device, " - exiting\n"
    sys.exit()
else:
    print "tsshot2frame: Found device", device
    dev.ctrl_transfer(0xc0, 4 )

# Setting the signal handler
signal.signal(signal.SIGUSR1, sigusr1_handler)

# Must have a QApplication running to use the other pyqt4 functions
app  = QtGui.QApplication(sys.argv)

# Take screenshots in regular intervals and send them to the frame;
# screenshots triggered by SIGNALS will come in addition
while True:
    print time.time(),
    takeshot()
    time.sleep(60)
    """
    Remember that receiving a SIGNAL will interrupt time.sleep !
    From the python documentation:
    time.sleep(secs)
    Suspend execution for the given number of seconds. The argument may be a
    floating point number to indicate a more precise sleep time. The actual
    suspension time may be less than that requested because any caught signal
    will terminate the sleep() following execution of that signal’s catching
    routine. Also, the suspension time may be longer than requested by an
    arbitrary amount because of the scheduling of other activity in the system.
    """
Following is the triggershot program:
_______________________________________________________________________________
#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: triggershot
# sends the SIGUSR1 signal (numerical value 10) to the
# script tsshot2frame when keypress detected
# Copyright (C) ullix

import time
import signal
import os
import sys
import subprocess
import termios
import fcntl



def triggersignal():
    """
    find the pid of our triggered-screen-shot program and send a
    SIGUSR1 to it
    """
    script = "tsshot2frame"

    print time.time(),"trigger: sending SIGUSR1 to ", script

    # execute shell command 'ps -A | grep tsshot2frame' and obtain its output
    p1 = subprocess.Popen(["ps", "-A"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["grep", script], stdin=p1.stdout, stdout=subprocess.PIPE)
    output = p2.communicate()[0]
    #print "pipe outsub=",output

    if script in output and '<defunct>' not in output:
        pid = int(output.split()[0])   # first field of the ps line is the pid
        #print script + " is running, pid: ", pid

    else:
        if '<defunct>' in output:
            #print script + " running but defunct, clear up first"
            os.system("killall " + script ) # clear up if defunct
        else:
            #print script + " not running"
            pass

        pid = subprocess.Popen("./" + script  ).pid
        #pid = subprocess.Popen(script).pid # if script is in path
        time.sleep(2) # give it time to start
        #print script + " restarted, pid: ", pid

    os.kill(pid, signal.SIGUSR1)


def getch():
    # code according to:
    # http://docs.python.org/faq/library#how-do-i-get-a-single-keypress-at-a-time
    fd = sys.stdin.fileno()
    oldterm = termios.tcgetattr(fd)
    newattr = termios.tcgetattr(fd)
    newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO
    termios.tcsetattr(fd, termios.TCSANOW, newattr)

    oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

    c = ""
    try:
        while True:
            # read from stdin as long as there are characters to be read
            # if all read then return
            try:
                c += sys.stdin.read(1)
            except IOError as (errno, msg):
                #print "IOError", errno, msg,
                break
    finally:
        # restore old settings
        termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
        fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)

    return c

#----- main starts here ------------------------------

triggersignal()
while True:
    time.sleep(0.3)
    c = getch()
    if "t" in c :
        print "Read character t, triggering screenshot"
        triggersignal()

Monday, March 5, 2012

sshot2frame - send screenshots to photoframe at video speed

#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: sshot2frame
#
# This screenshot-to-frame program takes a screenshot from your desktop
# and sends it to the  'Samsung SPF-87H Digital Photo Frame'
#
# This can be done at frame rates of 20+ fps so that it is even possible
# to watch video on the frame, when video is playing on the desktop!
# (tested with mythtv)
#
# It is an extension of the pyframe_basic program found here:
#    http://pyframe.blogspot.com/2011/12/pyframebasic-program_15.html
# Read that post to understand details not commented here
# Copyright (C) ullix

import sys
import struct
import usb.core

# additional imports are required
from PyQt4 import QtGui, QtCore
import Image
import StringIO
import time

device = "SPF87H Mini Monitor"
dev = usb.core.find(idVendor=0x04e8, idProduct=0x2034)

if dev is None:
    print "Could not find", device, " - using screen\n"
    frame = False
else:
    frame = True
    print "Found", device
    dev.ctrl_transfer(0xc0, 4 )  


# Must have a QApplication running to use the other pyqt4 functions
app  = QtGui.QApplication(sys.argv)

# Enter into a loop to repeatedly take screenshots and send them to the frame
start  = time.time()
frames = 0
while True:
    # take a screenshot and store into a pixmap
    pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())
   
    # if you want a screenshot from only a subset of your desktop, you can define it like this
    #pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId(), x=0, y= 600, width=800, height=480)

    # next line is needed only when screenshot does not yet have the proper dimensions for the frame
    # note that distortion will result when aspect ratios of desktop and frame are different!
    # if not needed then inactivate to save cpu cycles
    pmap = pmap.scaled(800,480)
   
    # if desired, save the pixmap into a jpg file on disk. Not required here
    #pmap.save(filename , 'jpeg')

    # create a buffer object and store the pixmap in it as if it were a jpeg file   
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)
    pmap.save(buffer, 'jpeg')
   
    # now get the just saved "file" data into a string, which we will send to the frame
    pic = buffer.data().__str__()
   
    ######################   
    # this code within ########## is needed only to create a PIL Image object to be shown below
    # by image.show(), e.g. for debugging purposes when no frame is present
   
    #picfile = StringIO.StringIO(pic)            # stringIO creates a file in memory
    #im1=Image.open(picfile)   
    #im = im1.resize((800,480), Image.ANTIALIAS) # resizing not needed when screenshot already has the right size
                                                 # note that distortion will result when aspect ratios of desktop
                                                 # and frame are different!
    #picfile.close()
    ######################
   
    if not frame:
        # remember to activate above ########### lines if you use im.show() command
        im.show()       
    else:
        rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
        pad = 16384 - (len(rawdata) % 16384)
        tdata = rawdata + pad * b'\x00'
        tdata = tdata + b'\x00'
        endpoint = 0x02
        bytes_written = dev.write(endpoint, tdata )

    frames += 1  

    # exit the while loop after some cycles, or remove code to get indefinite loop
    if frames > 100:
        break

    # set time delay between screenshots in seconds. The frame can handle some 20+fps,
    # so 0.1sec (i.e. max of 10fps) is ok for the frame but possibly too fast for a slow cpu
    #time.sleep(0.1)
   
runtime = time.time() -start
print "Frames per second: {0:0.2f}".format( frames / runtime)