AzureMobileService Library Published to Spark.IO

Many of you who know me know that I love the Spark Core. While I am still looking forward to my Photon coming soon, I have been using these “drop dead simple” wireless devices in a number of projects. In fact, I found myself using Azure in many of these projects, so a couple of my teammates joined me in creating a library to publish for public use. Shout outs to Paul DeCarlo and Bill Fink for their help along the way.

So here’s how simple it is…grab your Spark Core and let’s go.

Step 1: Create an Azure Mobile Service

If you don’t have an Azure account, you can get a free trial that will give you 30 days of access and $200 credit to use to explore Azure.

First, log into your Azure account at http://manage.windowsazure.com and click the “New” button in the lower left corner.

image

Next, click “Compute”, “Mobile Service”, “Create”.

image

Fill in all of the required information in the form and click the right-pointing arrow in the lower right to continue.

image

Next, enter the database credentials (or create a new SQL Database) and click the checkmark to complete the creation of the service.

image

Once your mobile service is created, navigate to it to gather a key piece of information we will need in Step 2.

image

Once you click on the mobile service name (in my case “AzureMobileSpark”), you will be taken to the Mobile Service dashboard. From here, you need to get a secret key in order to send data to your service. It is down at the bottom under “Manage Keys”.

image

Once you click on that link, you will see a pop-up with your Application and Master Keys. Copy the Application Key to your clipboard…you will need it in a moment.

image

Next, you need to click on the “Data” link to create a new table to store the data coming from your Spark.

image

Enter the required information, set any table-level permissions you want, and you are good to go here. Click the checkmark to create the new table.

image

You are now ready to proceed to the Spark build environment to begin editing code.

Step 2: Create a project on your Core

Launch the Spark IDE at https://www.spark.io/build

Once you have the build environment up and running, click the “Libraries” button in the lower left corner.

image

Search for “AzureMobileService” and click on the library to load the code and samples. You can now select “1_Create.cpp” and click the “Use this example” button.

image

Finally, change the code to point to the service you created in Step 1, with the correct name and key values, and you will be ready to push the code to your Core. You will also need to change the table name on line 19 to match the name of the table you created on the Data tab.

image
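If it helps to see what the library is doing on the wire, the insert is a plain Azure Mobile Services REST call: a POST to your table endpoint with the Application Key in an X-ZUMO-APPLICATION header. Here is a minimal sketch of assembling that request; the service name, table name, and key below are placeholders, and the library’s actual code may differ in its details:

```cpp
#include <cstdio>
#include <cstring>

// Placeholder values -- substitute your own service, table, and Application Key.
const char* SERVICE = "azuremobilespark";  // <service>.azure-mobile.net
const char* TABLE   = "TestData";
const char* APP_KEY = "YOUR-APPLICATION-KEY";

// Build the HTTP request an Azure Mobile Services insert expects.
// On the Core, a request of roughly this shape gets written to a TCPClient
// connected to <service>.azure-mobile.net on port 80.
int buildInsertRequest(char* out, size_t outLen, const char* json) {
    return snprintf(out, outLen,
        "POST /tables/%s HTTP/1.1\r\n"
        "Host: %s.azure-mobile.net\r\n"
        "X-ZUMO-APPLICATION: %s\r\n"
        "Content-Type: application/json\r\n"
        "Content-Length: %d\r\n"
        "\r\n"
        "%s",
        TABLE, SERVICE, APP_KEY, (int)strlen(json), json);
}
```

The example sketch sends a request like this every few seconds, which is when you will see the blue LED fire.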

Now that you have pushed the program to your Core, you should see the blue LED light up every five seconds to show that it is sending data to Azure. After a short while, go back to Azure and you should start to see the data in the table you created.

Step 3: Review the data in Azure Mobile Service

Now, log back into Azure and go to your mobile service. When you click on the Data tab, you should see your tables listed.

image

Once you click on the table you are posting your data to, you should see the data you sent from your Core.

image

At this point, you could take any of the values in the “id” column and use some of the other example routines, such as “Update” or “Delete”.

Step 4: Make the data more interesting

You will notice in the Create example that line 24 is currently hard-coded to column names like “Values1, Values2, Values3” and values of “1, 2, 3”. You can make this anything you want. For example, you could use a DHT22 sensor to publish temperature and humidity like this:

{ "TempC" : "23.5", "TempF" : "74.3", "Humidity" : "20.200001" }

image

This is where I jump off. Now you go and make something interesting. Keep in mind that you send JSON to Azure Mobile Services, and as long as the service has “Enable Dynamic Schema” turned on, the database will flex with you along the way. When you are done, be sure to turn that feature off (under the “Configuration” section) so that you can control the structure of the information flowing into your table.

Have fun and post here when you make something cool!

  • Mark Round

    In Step 4 above you talk about using snprintf(buffer, sizeof(buffer), " { "TempC" : "23.5", "TempF" : "74.3", "Humidity" : "20.200001" }"); to publish live data, for example from the DHT22. I cannot understand how to format the snprintf statement to use the variable names. I tried many formats, including snprintf(buffer, sizeof(buffer), " { "TempC" : Value1, "TempF" : Value2, "Humidity" : Value3 }"); but nothing seems to work. Can you show the correct format?

    • BSherwin

      You may need to escape your quotes, like this: snprintf(buffer, sizeof(buffer), "{\"Value1\":\"1\", \"Value2\":\"2\", \"Value3\":\"3\"}");

      • Mark Round

        Thank you, I have it receiving 4 variables now. In the Azure platform, what can we now do with this data? I would like to analyze the data for trends, highs, lows, etc., and I would like to perhaps graphically represent the data. Do you know how to look at this data?

        • BSherwin

          This data is all in a SQL database, so you can visualize it however you want. This was actually an early version of some work we did on Azure Mobile Services. However, you may be interested in a way to visualize your data in a more real-time fashion. Check out how we did this with a Particle webhook and Azure Event Hubs. You can read more about it here: http://aka.ms/pws, or see the Connect the Dots project at http://connectthedots.io.