DevDay 2015, Inspiration, and a quick look back…

So far this year, which is obviously nowhere near finished yet, I have had some amazing experiences: .NET Fringe, Polyglot 2015, the Progressive .NET Tutorials 2015, Dev Day 2015, and more. I decided to add a bit more of a personal note in this blog entry because of the inspiration I just got from Michał Śliwoń (@mihcall) in his Dev Day 2015 Aftermath write-up.

Just as Michał writes,

“Inspiration is like a spark. It can be one brilliant presentation at the conference, one sentence at some session, one hallway conversation with another attendee and I’m excited, coming back with a head full of new ideas. Every conference has this little spark”

and I completely agree. At .NET Fringe I got back into a few things on the .NET CLR stack, namely F# and a little toying around with Akka .NET and micro-services using those technologies. I also had a hand in organizing the conference and its origins, which I wrote about. Polyglot 2015 increased my desire to become more familiar and comfortable with functional programming languages. At the Progressive .NET Tutorials I was again inspired to dive deeper into functional languages and to take a closer look at everything from Weave to other container and virtualization based systems.

Thrashing Code News

One thing this led me to is putting together a list of people who are interested in these types of conferences. I’m talking about the really down to earth, nitty gritty, get into the weeds of the technology, and meet the people building and using that technology every day conferences. You can sign up for the list here – and do read the article just below the sign-up page, as this is NOT some spam list. I’ll be putting in real effort and time to put together good content when the list officially kicks off! I will blog about it, and of course get that first email out about Thrashing Code News, in the coming months.

At Dev Day I was again inspired by, and got to meet, many people. Which leads me to the number one thing that makes these conferences absolutely great. It’s all about the people who attend.

The People

I got to meet Rob Conery (@robconery / http://rob.conery.io/). We hung out, had beers, talked shop, talked surfing, talked tech and training screencasts, discussed future bad ass conferences (again, sign up to my list and I’ll keep you abreast of any mischievous conference Rob & I dive into) and tons more. It was seriously kick ass to meet Rob, especially after not getting a chance to at what must have been a gazillion conferences he and I have both attended before!

I finally met Christian Heilmann (@codepo8), who must also have been at a gazillion of the same conferences while we somehow managed not to meet each other. Good conversation, talk of Seattle, other devilish code happy things – and hopefully a beer or two to be had with good Christian in the near future in Seattle or Portland (or thereabouts).

I had the fortune of running into Alena Dzenisenka (@lenadroid) again doing what she does: telling, teaching, and showing people a whole lot of awesome F# handiwork. For instance, at Dev Day she was throwing down some machine learning math and helping people get started. She’s also got some talks lined up near the Cascadian lands (that’s Seattle and Portland, but also San Francisco and Dallas!!) if you haven’t noticed, so come get inspired to sling some functional code!

On day one the keynote by Chad Fowler (@chadfowler) was excellent. I hadn’t realized he was a fellow who escaped the south like I did, all while playing a bunch of music! I was able to catch Chad and chat a bit on day two of the conference. His presentation was great, and he’s motivated me to give his book The Passionate Programmer a read.

Another individual I’d been aiming to meet, Mathias Brandewinder (@brandewinder), was also at the conference. I even attended some of his workshop and learned a number of things about F# and machine learning. I’m definitely inspired to dig deeper into the machine learning realm and start figuring out more of the truly amazing things we can do with computers and machine learning algorithms – I honestly feel like we’ve only skimmed the surface of much of this technology. Mathias also has a book that is truly worth buying, titled “Machine Learning Projects for .NET Developers”. If you’re curious, yes, I have the book and am working through it steadily!  🙂

Gary Short (@garyshort) provided an amazing talk on digging into crop yields via the European Space Agency Data Science Project. I also enjoyed the multiple conversations I was able to have with Gary, from the talk of “really really really awesomely excited Americans” vs. “excited Americans” all the way to data science and the crop yields themselves. Gary’s talk is linked below, so get a dose of the crop yields yourself, and be sure to send any complaints to his @robashton twitter account!  (But seriously, you should follow Rob Ashton too, as he’s got a lot of good twitter nuggets.)

Another person I was super stoked to run into again is Tomas Petricek (who I hear might be in the Cascadian lands of the Seattle area in a month or so). I met Tomas at the Progressive .NET Tutorials in London and enjoyed a number of good conversations, his generally awesome personality, and his hilarious demeanor! Not sure I mentioned it, but he’s got some wicked F# chops too. He spoke about Understanding the World with Type Providers, which is something you should watch, as it’s an interesting way to wrap one’s mind around a lot of ideas.

I also, after many random conversations about a whole host of topics in the Functional Programming Slack (follow the link to sign up), got to meet Krzysztof Cieślak (@k_cieslak). Krzysztof (and if you can’t pronounce his name just keep trying, you’ll get it right sometime around 2023) was great to meet and catch up with in person. It was also great to hear tidbits about what he’s working on, since he’s driving some really cool projects, including the Ionide project for the Atom Editor.

There are so many people I enjoyed chatting with and getting to meet, and I really wish I had more time to hang out, chat, or hack with everybody. I met so many other individuals that I already feel like a prick for not being able to write something about every single awesome person I spoke with at Dev Day and in the days after the conference. To those I didn’t: sorry about that – drinks and dinner are on me when you’re in Portland!

…on that note, get subscribed to Thrashing Code News so I can update you when the rumblings and dates of the next kick ass conferences, hackathons, hacking festivals, or other great materials and learnings come up. In addition, get inspired to speak or get involved in some way, and help make the next conference you attend as kick ass as you’d want it to be! It’s easy, just fill out your name and email here.

…and to Michał and Rafał: I’ll be following up with you guys on some of my next conference efforts coming up in the Cascadian Pacific Northwest (i.e. the Seattle/Portland area)! Cheers!

Mapping Domain Names with name.com, Elastic Beanstalk, Elastic Load Balancer and AWS Route 53

I finally wrapped up my name server and DNS mapping needs with Name.com, Route 53 and Elastic Beanstalk. Since this was a little confusing, I thought a short write-up was in order. Thanks to Evan @evandbrown for helping out!

The first thing needed is a delegation set of name servers for your DNS and name server provider. These can be found by creating a hosted zone. To do this, open up the AWS Management Console and navigate to the Route 53 management area. The Route 53 icon is under the Compute & Networking section of the management console.

Beanstalk, Route 53 – Click for full size image

Upon navigating to the Route 53 console area, click on the Create Hosted Zone button.

Create Hosted Zone – Click for full size image

Once the zone is created, the delegation set can be found under the Hosted Zone Details. This delegation set now needs to be set up as the name servers with whoever the domain provider is – in this case name.com.

Delegation Set – Click for full size image.
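
If you’d rather script this than click through the console, the AWS CLI can do the same thing. This is just a sketch – the domain name and caller reference below are example placeholders, and it assumes you have the AWS CLI installed and configured.

# Create the hosted zone (the caller reference is any unique string).
aws route53 create-hosted-zone --name deconstructed.io --caller-reference my-zone-2015-01

# The delegation set (the four name servers) comes back in the response,
# or can be pulled back out later with the hosted zone id.
aws route53 get-hosted-zone --id <your_hosted_zone_id>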

Open up the management console for the name server administration – name.com in this case – and add each of the name servers from the delegation set.

Upon adding them the list should look something like this.

Name servers list built from the delegation set of the hosted zone. Click for full size image.

Once the name servers are set up, they will need time to propagate. This could take a good solid chunk of time – likely somewhere in the hours range – and don’t be surprised if it takes a little more than a day.
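
One optional sanity check on propagation, assuming you have dig available locally:

# Ask for the domain's NS records; once propagation completes these should
# match the four name servers from the Route 53 delegation set.
dig +short NS deconstructed.io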

While the propagation starts, navigate back to the AWS Management Console and open up the EC2 section of the console. On the right hand side of the Resources list there is a Load Balancers section. Click it.

Load Balancers – Click for full size image.

In this section there is a listing of all load balancers that have been created manually or by Elastic Beanstalk.

Load Balancers – Click for full size image.

Make note of the Load Balancer Name for selection in Route 53. This is what Route 53 needs in order to point an alias at the incoming traffic for that particular Elastic Beanstalk application. In the image above there are 4 load balancers listed. The best way to prevent confusion is to take note of the load balancer name at the time of creation, but this listing is the easiest way to find them otherwise.
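
If the console listing is ambiguous, the AWS CLI can list the load balancer names alongside their DNS names – again just a sketch, assuming the AWS CLI is set up:

# List each classic load balancer's name and DNS name.
aws elb describe-load-balancers --query 'LoadBalancerDescriptions[*].[LoadBalancerName,DNSName]' --output table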

Record Set – Click for full size image

Now go back to the hosted zone to set it up with the appropriate information. Create a new record with the appropriate name – in this case I was setting up admin.deconstructed.io (no, it isn’t live yet, I just set it up to test this out) – pointing to an alias target. Leave the Type set to A – IPv4 address and click the radio control so that Alias is set to Yes. In the alias target select the appropriate load balancer for the Elastic Beanstalk (or whatever it points to) application.
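
The same record can be created from the CLI. A rough sketch – the hosted zone id, the ELB’s hosted zone id, and the ELB DNS name below are all placeholders you’d swap in from the earlier steps:

# Create an A record aliased to the load balancer.
aws route53 change-resource-record-sets --hosted-zone-id <your_hosted_zone_id> --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "admin.deconstructed.io",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<elb_hosted_zone_id>",
        "DNSName": "<elb_dns_name>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'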

That’s it. Give it a few hours (or a day) and eventually the domain or subdomain will be pointed appropriately at the Elastic Beanstalk load balanced application.

Using Bosh to Bootstrap Cloud Foundry via Stark & Wayne Consulting

I finally sat down and really started to take a stab at Cloud Foundry Bosh. Here’s the quick lowdown on installing the necessary bits and getting an initial environment built. Big thanks out to Dr Nic @drnic, Luke Bakken & Brian McClain @brianmmcclain for initial pointers to where the good content is. With their guidance and help I’ve put together this how-to. Enjoy…  boshing.

Prerequisites

Step: Get an instance/machine up and running.

To make sure I had a totally clean starting point, I started out with an AWS EC2 instance to work from. I chose a micro instance loaded with Ubuntu. You can use your local workstation if you want, it really doesn’t matter. The one catch, of course, is that you’ll need a supported *nix based operating system.

Step: Get things updated for Ubuntu.

sudo apt-get update

Step: Get cURL to make life easy.

sudo apt-get install curl

Step: Get Ruby, in a proper way.

\curl -L https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm autolibs enable
rvm requirements

Enabling autolibs sets things up so that rvm will install all the requirements with the ‘rvm requirements’ command. It used to just show you what you needed, and then you’d have to go through and install them yourself. This requirements phase includes some specifics such as git, gcc, sqlite, and other tools needed to build, execute and work with Ruby via rvm. Really helpful things overall, which will come in handy later when using this instance for whatever purposes.

Finish up the Ruby install and set it as our default ruby to use.

rvm install 1.9.3
rvm use 1.9.3 --default
rvm rubygems current

Step: Get bosh-bootstrap.

bosh-bootstrap is the easiest way to get started with a sample bosh deployment. For more information check out Dr Nic’s Stark and Wayne repo on Github. (Also check out the Cloud Foundry Bosh repo.)

gem install bosh-bootstrap
gem update --system

Git was installed a little earlier in the process, so now set the default user name and email so that when we use bosh it knows what to use when cloning the repositories it needs.

git config --global user.name "Adron Hall"
git config --global user.email plzdont@spamme.bro

Step: Launch a bosh deploy with the bootstrap.

bosh-bootstrap deploy

You’ll receive a prompt, and here’s what to hit to get a good first deploy.

Stage 1: I selected AWS, simply because I have no OpenStack environment. One day maybe I can try out the other option; until then I went with the tried and true AWS. Here you’ll need to enter the access key & secret key from the AWS security settings for your AWS account.

For the region, I selected #7, which is us-west-2. That translates to the data center in Oregon. Why did I select Oregon? Because I live in Portland and that data center is about 50 miles away. Otherwise it doesn’t matter which region you select; any region can spool up almost any type of bosh environment.

Stage 2: Here, select default by hitting enter to choose the default bosh settings. These use a medium instance to spool up a good baseline Cloud Foundry environment, and also set up a security group specifically for Cloud Foundry.

Stage 3: At this point you’ll be prompted to select what to do; choose to create an inception virtual machine. After a while – sometimes a few minutes, sometimes an hour or two, depending on internal and external connections – you should receive the “Stage 6: Setup bosh” results.

Stage 6: Setup bosh

setup bosh user
uploading /tmp/remote_script_setup_bosh_user to Inception VM
Initially targeting micro-bosh…
Target set to `microbosh-aws-us-west-2'
Creating initial user adron…
Logged in as `admin'
User `adron' has been created
Login as adron…
Logged in as `adron'
Successfully setup bosh user
cleanup permissions
uploading /tmp/remote_script_cleanup_permissions to Inception VM
Successfully cleanup permissions
Locally targeting and login to new BOSH…
bosh -u adron -p cheesewhiz target 54.214.0.15
Target set to `microbosh-aws-us-west-2'
bosh login adron cheesewhiz
Logged in as `adron'
Confirming: You are now targeting and logged in to your BOSH

ubuntu@ip-yz-xyz-xx-yy:~$

If you look in your AWS Console you should also see an instance with a key pair named “inception” and one under the “microbosh-aws-us-west-2” name. The inception instance is an m1.small while the microbosh instance is an m1.medium.
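
From the shell, a couple of quick sanity checks can confirm the director is up and targeted – a minimal sketch, and note that the exact output varies by bosh CLI version:

# Show the currently targeted director, user, and version details.
bosh status

# List the VMs the director knows about.
bosh vms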

That should get you going with bosh. In my next entry around bosh I’ll dig into some of Dr Nic & Brian McClain’s work before diving into what exactly Bosh actually is. As one may expect from Stark & Wayne, there’s some pretty cool stuff on the way, so keep an eye over there.

Light up a Riak Cluster with AWS, A Few Notes…

I wanted to write up an intro to getting Riak installed on AWS. Even though the steps are absurdly simple and already available on the Basho Docs site, there are a few extra notes that can be very helpful at a few specific points in the process.

Start off by logging into AWS. At this point you can take two different paths that are almost identical: use the pre-built AWS Marketplace image of Riak, or just start from scratch. The difference is a total of about 2 steps – installing Riak and setting up some security port connections. I’m going to step through these instructions without using the pre-built image.

Security Group

First you’ll need a security group with the correct permissions set up. For that, make a new security group.

NOTE: No, I didn’t mean to misspell Riak, but it’s in there now.  😉

Before adding the ports, go to the security group details tab and copy the security group id. I’ve pointed it out in the image above.

Now add the following three rules with the security group as their source: ports 4369, 8099 & 6000-7999. For the source, set it to the security group id. Once you get all three added, the list should look like this (below). For each rule click the Add Rule button, and remember to click Apply Rule Changes. I often forget this because the screen on some of the machines I use only shows to the bottom of the Add Rule button, so you’ll have to scroll down to find the Apply Rule Changes button.

Now add the standard port 22 for SSH. Next get the final two, 8087 and 8098, set up and we’re ready to move on to creating the virtual machines.
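
The AWS CLI wasn’t part of this walkthrough originally, but for reference, here’s an approximate scripted equivalent of the rules above – sg-xxxxxxxx is a placeholder for the security group id copied from the details tab:

# Inter-node Riak ports, restricted to members of the security group itself.
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 4369 --source-group sg-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8099 --source-group sg-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 6000-7999 --source-group sg-xxxxxxxx

# SSH, plus Riak's protocol buffers (8087) and HTTP (8098) ports.
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8087 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8098 --cidr 0.0.0.0/0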

Server Virtual Machines

For creating virtual machines I just clicked on Launch Instance and used the classic wizard. From there you get a selection of images. I’ve used the AWS image to do this, but would actually suggest using a CentOS image of your choice or Red Hat Enterprise Linux (RHEL). Another great option is Ubuntu 12.04 LTS. Really though, use whatever Linux version or distro you like; there are 1-2 step instructions for installing Riak on almost every distro out there.

Next just launch a single instance. We’ll be able to launch duplicates of these further along in the process. I’ve selected a “Micro” here but I’m not intending to do anything with a remotely heavy load right now. At some point, I’ll upgrade this cluster to larger instances when I start putting it under a real load. I’ll have another blog entry to describe exactly how I do this too.

Keep hitting continue until you get to the key pair selection. Pick the key pair you want, either making a new one for this cluster or using one you already have. Either way works fine.

Continue again until you can select the security group that we created above.

Now keep hitting that continue button until you get to launch, and launch this thing. Once the instance is launched, open your preferred SSH tooling. The easiest way I’ve found to get the most current IP to connect to, along with the appropriate command, is to right click on the instance in the AWS Console and click on Connect. There you’ll find the command to connect via SSH.

Paste that into your SSH app and hit enter; you’ll see something akin to this.

$ cd Codez/working-content/
$ ssh -i riaktionz.pem root@ec2-54-245-201-97.us-west-2.compute.amazonaws.com
The authenticity of host 'ec2-54-245-201-97.us-west-2.compute.amazonaws.com (54.245.201.97)' can't be established.
RSA key fingerprint is 31:18:ac:1a:ac:fc:6e:6d:55:e8:8a:83:9a:8f:c7:5f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-54-245-201-97.us-west-2.compute.amazonaws.com,54.245.201.97' (RSA) to the list of known hosts.
Please login as the user "ubuntu" rather than the user "root".

Enter yes to continue connecting. For some images, like Ubuntu, you’ll have to do some tweaks to log in as “ubuntu” vs. “root”, and the same goes for the AWS image and others. I’ll leave it to you, dear reader, to get connected via ole’ SSH.

Another thing you may have to tweak (and google about) is the firewall setup on the various virtual machine images. For RHEL you’ll want to turn off the firewall or open up the specific connection ports. Since the AWS security group already handles this, it isn’t particularly important for the OS to keep running its own firewall service. In this case, I’ve turned off the OS firewall and just rely on the AWS firewall. To turn off the RHEL firewall, execute the following commands.

[root@ip-x-x-x-x]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@ip-x-x-x-x]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@ip-x-x-x-x]# chkconfig iptables off
[root@ip-x-x-x-x]#

Now is a perfect time to start those other instances. Navigate into the AWS Console again and right click on the virtual machine instance you’ve created. On that menu select Launch More Like This.

Go through and check the configuration on each of these – make sure the firewall is turned off, etc. Then move on to the next step: installing Riak and clustering the machines. So it’s time to get to the distributed, massively complex, extensive list of steps to install & cluster Riak. Ok, so that’s sarcasm.  😉

Step 1: Install Riak

Install Riak on each of the instances.

package=basho-release-6-1.noarch.rpm && \
wget http://yum.basho.com/gpg/$package -O /tmp/$package && \
sudo rpm -ivh /tmp/$package
sudo yum install riak

NOTE: For other installation methods, such as directly downloading the RPM, or for other Linux OSes, check out http://docs.basho.com/riak/latest/tutorials/installation/Installing-on-RHEL-and-CentOS/.

Step 2: Setup the Cluster

On the first instance, get the IP. You won’t need to do anything to this instance, just keep the IP handy. Then move on to the second instance and run the cluster command.

sudo riak-admin cluster join riak@<ip_of_the_first_node>

Do this on each of the instances you’ve added, always joining to that first node. When you’ve added them all, run the plan on that last instance (or really any of them). This will display a plan of what will take place when the cluster changes are committed.

sudo riak-admin cluster plan

If that looks all cool, commit the plan.

sudo riak-admin cluster commit

Get a check of the cluster.

sudo riak-admin member_status

That’s it, all done. You now have a Riak cluster. For more operations to try out on your cluster, check out this list of base API Operations.
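
For a quick poke at a node once it’s running, the HTTP API is handy – substitute a real node IP for the placeholder here:

# Should return OK if the node is alive.
curl http://<ip_of_a_node>:8098/ping

# Dumps node statistics as JSON, including ring membership.
curl http://<ip_of_a_node>:8098/stats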

First Looks @ AWS Toolkit for Visual Studio 2010

I’ll be presenting on the AWS Toolkit for Visual Studio 2010 in the very near future (Check out the SAWSUG Meetup on October 12th, that’s this Wednesday). I’ll be covering a number of things about the new AWS Toolkit for Visual Studio. My slides are available below (with links to the Google Docs and Slideshare Versions).

Direct link to Google Docs Presentation or the SlideShare Presentation.

The code for the presentation is available on Github under AWS-Toolkit-Samples. Beware: this code will change over time, though the core will stay the same.

Cloud Failure, FUD, and The Whole AWS Outage…

Ok.  First a few facts.

  • AWS has had a data center problem that has been ongoing for a couple of days.
  • AWS has NOT been forthcoming with much useful information.
  • AWS still has many data centers and cloud regions up and live, able to keep their customers up and running.
  • Many people have NOT built their architecture to be resilient in the face of an issue such as this. It all points to the mantra of “keep a backup”, but many companies have NOT done that.
  • Cloud Services are absolutely more reliable than comparable hosted services, dedicated hardware, dedicated virtual machines, or other traditional modes of compute + storage.
  • Cloud Services are currently the technologically superior option for compute + storage.

Now a few personal observations and attitudes toward this whole thing.

If your site is down because of a single point of failure, that is your bad architectural design, plain and simple. You never build a site like that if you actually expect to stay up 99.99% or even 90% of the time. Anyone in the cloud business, SaaS, PaaS, hosting or otherwise should know better than that. Every time I hear someone from one of these companies whining about how it was AWS’s responsibility, I ask: is the auto manufacturer responsible for the 32,000 innocent dead Americans in 2010? How about the 50,000 dead in the year of peak automobile deaths? Nope, those deaths are the responsibility of the drivers. When you get behind the wheel you MUST know what power you wield. You might laugh, you might jest that I use this corollary, but I wanted to use an example à la Frédéric Bastiat (if you don’t know who he is, check him out: Frédéric Bastiat). Cloud computing, and its use, is the responsibility of the user, who must build their system well.

One of the common things I keep hearing over and over about this is, “…we could have made our site resilient, but it’s expensive…”  Ok, let me think for a second.  Ummm, I call bullshit.  Here’s why.  If you’re a startup of the most modest means, you probably need at least 100-300 dollars of services (EC2, S3, etc) running to make sure your site can handle even basic traffic at a reasonable business level (i.e. 24/7, some traffic peaks, etc).  With just a hundred bucks one can set up multiple EC2 instances in DIFFERENT regions, load balance between them, and be sure they’re utilizing a logical storage medium (i.e. RDS, S3, SimpleDB, Database.com, SQL Azure, and the list goes on and on).  There is zero reason that a business should have their data stored ON the flippin’ EC2 instance.  If it is, please go RTFM on how to build an application for the Internets.  K Thx. Awesomeness!!  🙂

Now there are some situations, like when Windows Azure went down (yeah, the WHOLE thing) for an hour or two a few months after it was released.  It was, however, still in “beta” at the time.  If ALL of AWS went down, then the people who have not built a resilient system could legitimately complain right along with everyone that did.  But those companies, such as Netflix, AppHarbor, and thousands of others, have not had downtime because of this data center problem AWS is having.  Unless you’re on one instance because you want to keep your bill around $15 bucks a month, I see ZERO reason that you should still be whining.  Roll your site up somewhere else, get your act together and ACT. Get it done.

I’m honestly not trying to defend AWS either.  On that note, their response time and responses have been absolutely horrible. There have been zero social media, forum, or other responses that resemble a solid technical answer or status for this problem. In addition, Amazon has allowed the media to run wild with absolutely inane and sensational headlines and often poorly written articles.  From a technology company of Amazon’s capabilities and technical prowess (generally, they’re YEARS ahead of others) this is absolutely unacceptable and disrespectful to their customers on a personal level; Amazon should mature their support and public interaction along with their technology.

Now, enough of me berating those that have fumbled because of this. Really, I do feel for those companies and would be more than happy to help straighten out their architectures (not for free). Matter of fact, because of this I’ll be working up some blog entries about how to put together a geographically resilient site in the cloud.  So far I’ve been working on that instead of this rant, but I just felt compelled, after hearing even more nonsense about this incident, to add a little reason to the whole fray.  So stay tuned, and I’ll be providing ways to make sure that a single data-center issue doesn’t tear down your entire site!

UPDATE:  If you know of a well written, intelligent response to this incident, let me know and I’ll add the link here.  I’m not linking to any of the FUD media nonsense though, so don’t send me that junk.  🙂  Thanks, cheers!

Cloud Formation


Here are the presentation materials I’ve put together for tonight.


Check my last two posts regarding the location & such: