Sean O'Donnell's Weblog
I have an original pyboard from the MicroPython Kickstarter, but never got around to doing much with it. Lately I've been fiddling around trying to make a Wifi to Infrared bridge, so I can control my TV from my computers/phones. Unfortunately the pyboard does not come with Wifi, and addons seem to be at least $25 and often far more. A D1 mini on the other hand is $4 and completely replaces the pyboard for an application like this.
Compared to the pyboard, the D1 mini is slower (the default clock speed is about half that of a pyboard, but it can be overclocked to come close), and if Wifi is running, you only have about 36 KB of memory available for your own application. It supports 2.4 GHz Wi-Fi (802.11 b/g/n, supporting WPA/WPA2) and has a built-in antenna. It also has 4 MBytes of Flash storage built in, compared to the pyboard's 1 MByte.
If you are shopping for a D1 mini you will also come across the D1 mini Pro. It comes with 16 MBytes of flash, is smaller and lighter than a D1 mini, and has an external antenna connector as well as a built-in antenna. The only catch right now is that MicroPython only detects 1 MB of storage. If you can live with that until MicroPython supports it fully, it's probably a better buy and only costs $1 more. (Check this issue to see if the full 16 MBytes has support yet).
If you want something with slightly better support and documentation, take a look at the Adafruit Feather HUZZAH. At $15 it's much more expensive than a D1 mini, but still a lot cheaper than a pyboard and an adaptor. Adafruit has extensive documentation and tutorials on their site. These are worth a read even if you do go with a D1 mini.
The D1 mini usually comes with Arduino or NodeMCU preinstalled. So to use it you have to either find prebuilt firmware, or build it yourself. I found several tutorials, but all either assumed a slightly different operating system, or skipped steps that caused me a lot of frustration. What follows are build and installation instructions assuming you are running Ubuntu 16.04 (Xenial).
In order to connect to your D1 mini, you will need your user to be a member of the dialout group. Run the following command:
sudo addgroup $USER dialout
and then log out and log in again
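To confirm the change took effect after logging back in, you can list your user's groups; dialout should appear in the output:

```shell
# List the groups the current user belongs to; look for "dialout"
id -nG
```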
If you are happy to use prebuilt firmware, you can find it here. You are looking for the ESP8266 section. Then skip to the Installation Instructions below.
First we need to make sure we have all the packages and libraries we will need installed:
sudo apt-get install gperf bison flex help2man libncurses5-dev make autoconf texinfo libtool libtool-bin g++ python unzip python-serial git screen make
Now clone the ESP SDK. (All the instructions from here on in assume you start in your home directory; if you want to do it elsewhere, modify the commands to match.)
cd ~
git clone --recursive https://github.com/pfalcon/esp-open-sdk.git
Now let's build the SDK; this took about 20 minutes on my laptop:
cd esp-open-sdk/
make STANDALONE=y
Now the SDK is ready to use, let's put it on our path.
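The build prints the exact export line to use when it finishes; assuming the default checkout location used above, it should look like this (xtensa-lx106-elf/bin is where the SDK puts the compiled toolchain):

```shell
# Add the freshly built Xtensa toolchain to the PATH for this terminal session
export PATH=$HOME/esp-open-sdk/xtensa-lx106-elf/bin:$PATH
```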
You will need to run that command any time you open a new terminal and want to use the SDK.
Now we are ready to build MicroPython, let's check it out from GitHub.
cd ~
git clone https://github.com/micropython/micropython.git
cd micropython
git submodule update --init
And finally, build our firmware
make -C mpy-cross
cd esp8266/
make axtls
make
And now we should have our firmware in a file called firmware-combined.bin in the build directory.
Find the port your board shows up on. If you don't know it, the easiest way is to run
ls /dev/tty*
then plug in your MicroPython board and run
again. Compare the two lists and find the entry that appears after the board is plugged in.
Mine shows up as /dev/ttyUSB0
Before we install MicroPython, it's best to erase the current contents of the flash. (If esptool.py isn't already on your system, it can be installed with pip install esptool.)
esptool.py --port /dev/ttyUSB0 erase_flash
If this refuses to write, check that your user is a member of the dialout group, and if not, add them as shown above.
Now we write our firmware file to the board.
cd $HOME/micropython/esp8266
esptool.py -p /dev/ttyUSB0 write_flash 0x0000000 ./build/firmware-combined.bin
Press the little reset button on the side of the board. The board will reboot, the blue LED on the side will blink briefly and MicroPython should be running.
We can use screen to connect to the board
screen /dev/ttyUSB0 115200
You should now see a python REPL!
Let's try something simple:
>>> 1 + 1
2
>>>
Hurray, we have Python running on this tiny computer!
The D1 Mini has a blue LED built in, let's blink it!
>>> from machine import Pin
>>> p2 = Pin(2, Pin.OUT)
>>> p2.high()
>>> p2.low()
>>> p2.high()
>>> p2.low()
As you switch between high and low, you should see the LED turn on and off (the built-in LED is wired active-low, so low() turns it on). But the real point of this board is the Wifi support, so let's connect to a Wifi network.
>>> import network
>>> wifi = network.WLAN(network.STA_IF)
>>> wifi.active(True)
>>> wifi.connect('your-ssid', 'your-password')
Nothing too exciting here, but let's make an HTTP request:
>>> import usocket as socket
>>> s = socket.socket()
>>> address = socket.getaddrinfo("google.com", 80)
>>> print("Address:", address)
>>> connect_address = address[0][-1]
>>> print("Connect address:", connect_address)
>>> s.connect(connect_address)
>>> s.send(b"GET / HTTP/1.0\r\n\r\n")
>>> while True:
...     data = s.recv(4096)
...     if data:
...         print(str(data, 'utf8'), end='')
...     else:
...         break
...
You should see the HTML for the google homepage come back. Congratulations! Your tiny computer is connected to the internet.
I'll try and follow this up with more details on the Wifi -> Infrared work I'm doing.
The sad news that Yahoo plans to shut down del.icio.us reached me this week (although there's still hope). I use del.icio.us pretty much every day and was a little traumatized upon hearing this. Once I had finished wailing and gnashing my teeth I set out looking for somewhere to go.
There are many bookmarking sites/services out there, but I fear change, and pinboard.in seemed like the closest thing to a plain replacement. It even supports the same API as del.icio.us. There's a small charge for signing up, but no recurring fee, so I broke out the credit card and joined up.
The next step was to figure out how to migrate my bookmarks. del.icio.us provides an export-to-HTML feature in its settings area, but a quick look at the export revealed some data was missing (mostly extended descriptions). Rabid googling revealed a lesser-known XML export mechanism. To use it, visit https://api.del.icio.us/v1/posts/all, enter your username and password, and save the resulting XML file.
Now to get my bookmarks into pinboard.in. I broke out my trusty text editor and battered together the script below, which works just fine. A few hours later all my bookmarks were in pinboard.in, their bookmarklets were installed in my browser, and I'm loving their read-later features. Sean is a happy geek again.
You can download my migration script. To use it:
python delmigrate.py backup.xml username password
Here's the source for the curious.
from xml.dom import minidom
import sys
import urllib
import urllib2
import time

user = sys.argv[2]
password = sys.argv[3]
endpoint = "https://api.pinboard.in"
url = "/v1/posts/add?"

#open the xml file to import from and parse it
f = open(sys.argv[1], "r")
doc = minidom.parse(f).documentElement

#keep count of how many urls have been imported
urlcount = 0
count = 0
ellength = len(doc.childNodes)
failcount = 0
while count < ellength:
    e = doc.childNodes[count]
    if e.nodeType == e.ELEMENT_NODE:
        print "import url %s" % urlcount
        #get the attributes from the xml
        href = e.getAttribute("href")
        description = e.getAttribute("description")
        extended = e.getAttribute("extended")
        tags = e.getAttribute("tag")
        dt = e.getAttribute("time")
        rargs = dict(url=href, description=description,
                     extended=extended, tags=tags, dt=dt)
        shared = e.getAttribute("shared")
        if shared.strip() == 'no':
            rargs['shared'] = 'no'
        #encode the values as utf-8
        rargs = dict([k, v.encode('utf-8')] for k, v in rargs.items())
        print rargs
        #set up http auth for pinboard.in
        #doing this for every request may seem wasteful, but urllib2
        #seems to forget the auth details after a half dozen requests
        #if you dont
        password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
        password_manager.add_password(None, endpoint, user, password)
        auth_handler = urllib2.HTTPBasicAuthHandler(password_manager)
        opener = urllib2.build_opener(auth_handler)
        urllib2.install_opener(opener)
        #build the request to send
        request = urllib2.Request(endpoint + url + urllib.urlencode(rargs))
        #set the user agent
        request.add_header('User-Agent', 'SeansDeliciousMigrater')
        try:
            #send the request and read the response
            r = opener.open(request)
            response = minidom.parse(r).documentElement.getAttribute("code")
        except Exception, e:
            response = str(e)
        #if we get an invalid response, retry; probably throttled
        if response != "done":
            failcount += 1
            print "Failure: Invalid response: %s" % response
            if failcount > 4:
                print "Aborting: Invalid response %s" % response
                break
            else:
                print "waiting for 30 seconds and retrying"
                time.sleep(30)
        else:
            failcount = 0
            count += 1
            #put in a delay between requests to reduce odds of throttling
            time.sleep(1)
            urlcount += 1
    else:
        count += 1
print "%s urls imported" % urlcount
All the shares are owned by those companies in equal measure, and I can tell you that their regulations are written in Python.
We are proposing to require that most ABS issuers file a computer program that gives effect to the flow of funds, or “waterfall,” provisions of the transaction. We are proposing that the computer program be filed on EDGAR in the form of downloadable source code in Python. …
via Sean McGrath
Every Amazon S3 library I can lay my hands on (for Python at least), seems to read the entire file to be uploaded into memory before sending it. This might be alright when uploading lots of small files, but I have needed to upload a lot of very large files, and my poor old server would creak under the weight of that kind of memory usage.
I managed to bolt a solution together using urllib2 and poster that has been working reliably for me for the past few months. I'm going to show you how.
S3 is essentially a big Python dictionary in the cloud: you give it a key and a value (file) to store, and later on you can read it back out again. S3 has a nice HTTP API, so you can read and write to the store using standard HTTP libraries.
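To make the analogy concrete, here's a toy sketch using a plain in-memory dict (just an illustration of the model, not real S3 calls; the key and value are made up):

```python
# A toy stand-in for an S3 bucket: keys map to file contents
bucket = {}

# "PUT": store some bytes under a key
bucket["meonthebeach.jpg"] = b"...jpeg bytes..."

# "GET": read them back out again by key
photo = bucket["meonthebeach.jpg"]
print(photo == b"...jpeg bytes...")  # True
```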
The area you put your files into is called a bucket. Bucket names (which have restrictions) are globally unique: if you make a bucket called holiday_photos, then no one else using S3 can have a bucket called holiday_photos. That might sound weird, but it has its advantages, because you can now access your files from http://holiday_photos.s3.amazonaws.com/. If you set the permissions up so anyone can read the contents of the bucket, the whole world can see your files via http://holiday_photos.s3.amazonaws.com/.
The flip side of this is that you can upload your files, let's say "meonthebeach.jpg", by using HTTP PUT; in this case, PUT to http://holiday_photos.s3.amazonaws.com/meonthebeach.jpg.
When uploading to S3, we need to provide a few HTTP headers along with our file data when we PUT.
Authorization - This is the tricky one. S3 requires that your PUT request be accompanied by an authorization string in the following format: AWS AWS_ACCESS_KEY_ID:SIGNATURE. The AWS_ACCESS_KEY_ID is the one provided to you when you signed up to S3.
The signature is a string consisting of several of the headers you are sending, along with the resource you are putting, concatenated and hashed with your AWS secret access key. Constructing the signature is quite complicated in the general case, so I am going to show a method of generating it for the specific type of upload request we will be making. If you need to send headers that we are not using here, see Amazon's documentation for how to create the Authorization header.
The signature string consists of the HTTP verb (PUT), the Content-MD5 (blank for us), the Content-Type, the Date, any x-amz-* headers (here just x-amz-acl:public-read), and the resource path, each separated by a newline. Here's a code example of creating this:
sig_data = "PUT\n\n%s\n%s\nx-amz-acl:public-read\n%s" % (
    content_type, date, resource)
We then take this string, create an HMAC-SHA1 hash of it using your secret access key, and base64 encode the result.
signature = base64.encodestring(
    hmac.new(settings.AWS_SECRET_ACCESS_KEY, sig_data, sha1).digest()
).strip()
And that's your signature.
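For anyone following along on Python 3 (where base64.encodestring is gone and strings must be encoded before hashing), the same signing steps look like this sketch; the key and header values below are made up for illustration:

```python
import base64
import hmac
from hashlib import sha1

# All of these values are hypothetical examples; substitute your own
secret_key = b"example-secret-key"
content_type = "image/jpeg"
date = "Tue, 27 Mar 2007 21:15:45 GMT"
resource = "/holiday_photos/meonthebeach.jpg"

# Build the newline-separated string to sign, as described above
sig_data = "PUT\n\n%s\n%s\nx-amz-acl:public-read\n%s" % (
    content_type, date, resource)

# HMAC-SHA1 with the secret key, then base64 encode the digest
signature = base64.b64encode(
    hmac.new(secret_key, sig_data.encode("utf-8"), sha1).digest()
).decode("ascii")
print(len(signature))  # 28: base64 of a 20-byte SHA1 digest
```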
Poster is a small library that works with urllib2 to allow streaming uploads. All you need to do is import it and call a single function, which registers poster's custom URL openers with urllib2, and you are good to go.
import urllib2
from poster.streaminghttp import register_openers

register_openers()
Secondly, we need to tell urllib2 to use HTTP PUT rather than POST. We do this by creating a request object and overriding its get_method:
request = urllib2.Request(url, data=data)
request.get_method = lambda: 'PUT'
And then we can make our request and read the response
response = urllib2.urlopen(request).read()
The last step for use with poster is that rather than passing the file object to be uploaded as data, we pass an iterator that provides the file data chunk by chunk. For example:
def read_data(file_object):
    while True:
        r = file_object.read(64 * 1024)
        if not r:
            break
        yield r

f = open("text.txt", "r")
data = read_data(f)
data is now a generator that will return our file 64 KB at a time.
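To see the chunking in action without a real file, here's a quick sketch using an in-memory buffer and a deliberately tiny chunk size:

```python
import io

def read_data(file_object, chunk_size=4):
    # Yield the file in fixed-size chunks instead of reading it all at once
    while True:
        r = file_object.read(chunk_size)
        if not r:
            break
        yield r

f = io.BytesIO(b"abcdefghij")  # 10 bytes of sample data
print(list(read_data(f)))  # [b'abcd', b'efgh', b'ij']
```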
Below is the source for a simple command line tool that will take a filename, bucket name, and Amazon credentials, and upload the file to the bucket, making it publicly readable.
import os
import sys
import time
import base64
import hmac
import mimetypes
import urllib2
from hashlib import sha1

from poster.streaminghttp import register_openers


def read_data(file_object):
    while True:
        r = file_object.read(64 * 1024)
        if not r:
            break
        yield r


def upload_file(filename, bucket, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY):
    length = os.stat(filename).st_size
    content_type = mimetypes.guess_type(filename)[0]
    resource = "/%s/%s" % (bucket, filename)
    url = "http://%s.s3.amazonaws.com/%s" % (bucket, filename)
    date = time.strftime("%a, %d %b %Y %X GMT", time.gmtime())
    sig_data = "PUT\n\n%s\n%s\nx-amz-acl:public-read\n%s" % (
        content_type, date, resource)
    signature = base64.encodestring(
        hmac.new(AWS_SECRET_ACCESS_KEY, sig_data, sha1).digest()).strip()
    auth_string = "AWS %s:%s" % (AWS_ACCESS_KEY_ID, signature)
    register_openers()
    input_file = open(filename, 'rb')
    data = read_data(input_file)
    request = urllib2.Request(url, data=data)
    request.add_header('Date', date)
    request.add_header('Content-Type', content_type)
    request.add_header('Content-Length', length)
    request.add_header('Authorization', auth_string)
    request.add_header('x-amz-acl', 'public-read')
    request.get_method = lambda: 'PUT'
    urllib2.urlopen(request).read()


if __name__ == "__main__":
    filename = sys.argv[1]
    bucket = sys.argv[2]
    AWS_ACCESS_KEY_ID = sys.argv[3]
    AWS_SECRET_ACCESS_KEY = sys.argv[4]
    upload_file(filename, bucket, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
I used to work with a guy (Hi Daniel) who got everyone he knew to send him OPML files from their RSS readers so he could find new gems to subscribe to. I'm feeling kind of bored at the moment, so I am going to repeat his experiment. Anyone who reads this, or sees the related tweet, please send me your OPML file. If your RSS reader makes it difficult to export a list of links, then by all means send them in whatever format you like.
In a week's time, I'll take the results, crunch them a little, and put them up for all to see, so you can get the benefit too. My email address can be grabbed from the contact link to the left. Come on, send me your links!
For the curious, here is my current list of feeds.