Don’t Learn to Code, That’s Just Nonsense… Learn Instead!

I keep reading post after post after post. “Learn to code”, “Everybody Should Learn to Code”, “In the Future Everyone Will Need to Know How to Code, Learn Now!”, and so many more. I’m going to point out a few very important tips for life. These tips are especially important when it comes to programming, or NOT programming.

Then, I read something that was like a lovely breath of fresh air. It was as if someone had actually pondered the world for just one more second past “LEARN TO CODE” and thought, “naw, that’s probably a bit much”. I read Yevgeniy Brikman’s blog post “Don’t Learn to Code, Learn to Think”. You might have seen it. It’s been picked up by more than a few other media sources and extensively tweeted.

Now, in Yevgeniy’s article there is a lot of talk about still learning computer science. I agree with that too, versus just learning the semantic and pedantic details of programming. However, I’m going a step further and advising this…

DON’T LEARN TO CODE UNLESS YOU ABSOLUTELY HAVE TO!

In addition to that advice, here’s some serious advice that goes above and beyond what to learn in school. This very short list of tips is about what to learn for life, to do well in life, and not just for the petty demands of school. These tips are for any competent person who wants to excel in anything they do. If you’re about to get into high school, college, or whatever, these are the things you should be focused on and thinking about extensively.

Systems Thinking

Don’t learn to write code just to learn to program. That’s just absurd and a waste of time. Learn to think, and not just to think, but to think about systems. Learn to understand how systems work together at a deep level.

As an example, dig into something like the freight system in North America (or Europe) and figure out how it works functionally from the strategic to the tactical level.

Learn to throw away confirmation bias and cognitive dissonance, and always ask yourself if you have the data you need to draw connections about life in an accurate and meaningful way. If you don’t, keep asking questions and keep learning.

Autodidactism

Become an autodidact. You instantly become dramatically more powerful than anyone that requires structured teaching to attain new knowledge. It opens up doors and instantly gives you an advantage in any conversation, any new skill you want, and in visiting anywhere in the world. When you learn to learn, the world changes for you. You’ll be able to understand and deduce conclusions and solutions faster than any counterpart who can’t do this. It also gives you a significant advantage in programming, if you do decide you want to code.

In the process of becoming an autodidact one helpful idea is to actually study learning. Take a course, get a book, find content, and just observe yourself. Get as much information as you can about how people learn and read it or watch videos on it. Whatever the case, get as many ideas and strategies about how you learn and what the most effective methods are for you to gain a deep, systemic, and ordered knowledge of topics you want to understand.

Summarized

These two skills will give you far more of an advantage in life than merely learning to program or taking classes or going to school to learn to program/code/hack/whatever. So do yourself a favor, don’t get suckered by the “Learn programming and get a good job QUICK!” nonsense. If you gain the two skills I mentioned and get your brain sorted, you’ll do much better than merely learning to program, and you’ll thank yourself in the future!

I can promise you that!  🙂

__4 “CD Is Working, Let’s Get a Site Live with Loopback!”

Since it has been more than a few weeks let’s do a quick recap of the previous posts in this series.

  1. “Introducing the Thrashing Code Team & Projects” – Know who’s working on what and what the projects are.
  2. “Getting Started, Kanban & First Steps for a Sharing App” – Getting the kanban put together and the team involved.
  3. “Starting a Basic Loopback API & Continuous Integration” – Getting the skeleton of the API application set up and the continuous integration services running.
  4. “Going the Full Mile, Continuous Delivery” – Here the team got the full continuous delivery process set up for ongoing development efforts.

In this article of the series I work with some of my cohort to get initial parts of the application deployed to production. For the first part of this, let’s get some of the work that Norda has done into the project and get it running on the client side.

Client Side

The client directory of the Loopback project.

The first thing I needed was to get a static page set up. This page would then be used to submit an email address to the server side, which would handle whatever processing I’d need for the email message confirmation and related actions.

In Loopback, it is very easy to set up a static page. I just needed a simple index.html page, so I created one in the client directory of the project.

The very first iteration was literally an index.html file with the words “this should work” in it. But after some tweaking and adding in Norda’s work, the team ended up with the page shown further below.

The next thing to do is set up the Loopback.io framework to host a static site page.

Static Page via Loopback.io

This part of the article is taken pretty much straight out of the Strongloop Loopback.io documentation on static page hosting.

Applications typically need to serve static content such as HTML and CSS files, client JavaScript files, images, and so on.  It’s very easy to do this with the default scaffolded LoopBack application.  You’re going to configure the application to serve any files in the /client directory as static assets.

First, you have to disable the default route handler for the root URL.   Remember back in Create a simple API (you have been following along, haven’t you?) when you loaded the application root URL, http://localhost:3000/, you saw the application respond with a simple status message such as this:

{"started":"2014-11-20T21:59:47.155Z","uptime":42.054}

This happens because by default the scaffolded application has a boot script named root.js that sets up route-handling middleware for the root route (“/”):

server/boot/root.js

// Install a `/` route that returns server status
module.exports = function(server) {
  var router = server.loopback.Router();
  router.get('/', server.loopback.status());
  server.use(router);
};

This code says that for any GET request to the root URI (“/”), the application will return the results of loopback.status().

To make your application serve static content you need to disable this script.  Either delete it or just rename it to something without a .js ending (that ensures the application won’t execute it).
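For example, from the project root you could rename it like so (a sketch; any non-.js extension keeps it from being executed as a boot script):

mv server/boot/root.js server/boot/root.js.disabled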

Define static middleware

Next, you need to define static middleware to serve files in the /client directory.

Edit server/middleware.json.  Look for the “files” entry:

...
  "files": {
  },
...

Add the following:

server/middleware.json

...
  "files": {
    "loopback#static": {
      "params": "$!../client"
    }
  },
...

These lines define static middleware that makes the application serve files in the /client directory as static content.  The $! characters indicate that the path is relative to the location of middleware.json.
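With root.js disabled and the middleware defined, a quick sanity check is to start the app and request the root URL (assuming the default scaffolded entry point):

node .
curl http://localhost:3000/

With nothing in /client yet, you should see a 404 response rather than the old status JSON, which confirms the static middleware is now in control of that route.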

Add an HTML file

Now, the application will serve any files you put in the /client directory as static (client-side) content.  So, to see it in action, add an HTML file to /client.  For example, add a file named index.html with this content:

/client/index.html

<!DOCTYPE html>
<html>
<head>
    <title>LoopBack</title>
</head>
<body>
    <h1>Whatevs</h1>
    <p>
        Some static content...  but just look at the big HTML file below!  🙂
    </p>
</body>
</html>

Of course, you can add any static HTML you like–this is just an example.

Graphic Assets

Norda set up the project with a number of assets for the current page creation as well as future pages the site will need. The theme is a high quality theme, with the Coder Swap logo added at the respective key locations. The following are the graphics assets included in the project under the client directory here.

The key files included, which will be used for the first site we deploy, are the various favicon, logo, and related images. You can see those in the root of the client directory. There are a whole bunch of them because of the funky mobile design requirements.

Index.html

The first page I’ve set up is a simple sign-up page, with no real functionality. Just something to get started with to build off of. The code for that page looks like this.

<!DOCTYPE html>
<html>
<head>
    <title>
        Code Swap - Notify Me!
    </title>
    <link href="http://fonts.googleapis.com/css?family=Lato:100,300,400,700" media="all" rel="stylesheet"
          type="text/css"/>
    <link href="stylesheets/bootstrap.min.css" media="all" rel="stylesheet" type="text/css"/>
    <link href="stylesheets/font-awesome.min.css" media="all" rel="stylesheet" type="text/css"/>
    <link href="stylesheets/se7en-font.css" media="all" rel="stylesheet" type="text/css"/>
    <link href="stylesheets/style.css" media="all" rel="stylesheet" type="text/css"/>
    <link rel="shortcut icon" href="favicon.ico">
    <link rel="apple-touch-icon" sizes="57x57" href="apple-icon-57x57.png">
    <link rel="apple-icon" sizes="114x114" href="apple-icon-114x114.png">
    <link rel="apple-icon" sizes="72x72" href="apple-icon-72x72.png">
    <link rel="apple-icon" sizes="144x144" href="apple-icon-144x144.png">
    <link rel="apple-icon" sizes="60x60" href="apple-icon-60x60.png">
    <link rel="apple-icon" sizes="120x120" href="apple-icon-120x120.png">
    <link rel="apple-icon" sizes="76x76" href="apple-icon-76x76.png">
    <link rel="apple-icon" sizes="152x152" href="apple-icon-152x152.png">
    <link rel="apple-icon" sizes="180x180" href="apple-touch-icon-180x180.png">
    <link rel="icon" type="image/png" href="favicon-192x192.png" sizes="192x192">
    <link rel="icon" type="image/png" href="favicon-160x160.png" sizes="160x160">
    <link rel="icon" type="image/png" href="favicon-96x96.png" sizes="96x96">
    <link rel="icon" type="image/png" href="favicon-16x16.png" sizes="16x16">
    <link rel="icon" type="image/png" href="favicon-32x32.png" sizes="32x32">
    <meta name="msapplication-TileColor" content="#ffffff">
    <meta name="msapplication-TileImage" content="mstile-144x144.png">
    <meta name="msapplication-config" content="browserconfig.xml">

    <script src="http://code.jquery.com/jquery-1.10.2.min.js" type="text/javascript"></script>
    <script src="http://code.jquery.com/ui/1.10.3/jquery-ui.js" type="text/javascript"></script>
    <script src="javascripts/bootstrap.min.js" type="text/javascript"></script>
    <script src="javascripts/modernizr.custom.js" type="text/javascript"></script>
    <script src="javascripts/main.js" type="text/javascript"></script>
    <meta content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" name="viewport">
</head>
<body class="login2">
<!-- Registration Screen -->
<div class="login-wrapper">
    <a href="./">
        <img width="100"
             height="30"
             src="images/logo-login@2x.png"/>
    </a>
    <form>
        <div class="form-group">
            <div class="input-group">
                <span class="input-group-addon">
                    <i class="fa fa-envelope"></i>
                </span>
                <input class="form-control"
                       type="text"
                       value="Enter your email address">
            </div>
        </div>
        <input class="btn btn-lg btn-primary btn-block"
               type="submit"
               value="Register for updates!">
    </form>
</div>
<!-- End Registration Screen -->
</body>
</html>

As you can see, the bulk of the page is favicon and logo related nonsense, while only about a third of it is actually the HTML for the form itself. What that looks like when rendered is something like this.

The registration page.
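The form doesn’t actually submit anywhere yet. When it does get wired up, a minimal sketch might look something like the following (the /api/subscribers endpoint here is hypothetical; the real model and route come later in the series):

$('form').on('submit', function (e) {
    e.preventDefault();
    // post the entered email address to the (hypothetical) endpoint
    $.post('/api/subscribers', { email: $('input[type=email]').val() })
        .done(function () { alert('Registered for updates!'); })
        .fail(function () { alert('Something went wrong, please try again.'); });
});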

Once that page is up we can then commit to the production branch as outlined in the previous blog entry.
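Assuming the branch layout from the previous entry, that step looks roughly like this (a sketch; branch names depend on how you set up your delivery pipeline):

git add client/
git commit -m "Add static registration page"
git checkout production
git merge master
git push origin production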

btw – If you’re curious (especially if you’ve read the intro blog entry about the mock team), you might be wondering where, and from whom, and how I created this gorgeous theme. Well, I didn’t, I actually purchased this sweet theme from ThemeForest. The specific theme is se7en.

_____101 |> F# Coding Ecosystem: Paket && Atom w/ Paket

One extremely useful tool to use with F# is Paket. Paket is a package manager that provides a super clean way to manage your dependencies. Paket can handle everything from NuGet dependencies to git or file dependencies. It really opens up your project capabilities to easily pull in and handle dependencies, wherever they are located.

I cloned the Paket Project first, since I would like to have the very latest and help out if anything came up. For more information on Paket check out the about page.

git clone git@github.com:fsprojects/Paket.git

I built that project with the respective ./build.sh script and all went well.

./build.sh

NOTE – Get That Command Line Action

One thing I didn’t notice immediately in the docs (I’m putting in a PR right after this blog entry) was any way to actually get Paket set up for the command line. On bash, Windows, or whatever, it seemed a pretty fundamental missing piece, so I’m going to doc that right here, but also submit a PR based on the issue I added here. It could be I just missed it, but either way, here’s the next step that will get you set up the rest of the way.

./install.sh

Yeah, that’s all it was. Kind of silly, eh? Maybe that’s why it isn’t documented anywhere that I could see. After the installation script is run, just execute paket and you’ll get the list of the various commands, as shown below.

$ paket
Paket version 1.31.1.0
Command was:
  /usr/local/lib/paket/paket.exe
available commands:

	add: Adds a new package to your paket.dependencies file.
	config: Allows to store global configuration values like NuGet credentials.
	convert-from-nuget: Converts from using NuGet to Paket.
	find-refs: Finds all project files that have the given NuGet packages installed.
	init: Creates an empty paket.dependencies file in the working directory.
	auto-restore: Enables or disables automatic Package Restore in Visual Studio during the build process.
	install: Download the dependencies specified by the paket.dependencies or paket.lock file into the `packages/` directory and update projects.
	outdated: Lists all dependencies that have newer versions available.
	remove: Removes a package from your paket.dependencies file and all paket.references files.
	restore: Download the dependencies specified by the paket.lock file into the `packages/` directory.
	simplify: Simplifies your paket.dependencies file by removing transitive dependencies.
	update: Update one or all dependencies to their latest version and update projects.
	find-packages: EXPERIMENTAL: Allows to search for packages.
	find-package-versions: EXPERIMENTAL: Allows to search for package versions.
	show-installed-packages: EXPERIMENTAL: Shows all installed top-level packages.
	pack: Packs all paket.template files within this repository
	push: Pushes all `.nupkg` files from the given directory.

	--help [-h|/h|/help|/?]: display this list of options.

Paket Elsewhere && Atom

If you’re interested in Paket with Visual Studio I’ll let you dig into that on your own. Some resources are Paket Visual Studio on Github and Paket for Visual Studio. What I was curious about, though, was Paket integration with either Atom or Visual Studio Code.

Krzysztof Cieślak (@k_cieslak) and Steffen Forkmann (@sforkmann) maintain the Paket.Atom Project, and Krzysztof Cieślak also handles the atom-fsharp project for Atom. Watch this gif for some of the awesome goodies that Atom gets with the Paket.Atom Plugin.

Click for fullsize image of the gif.

Getting Started and Adding Dependencies

I’m hacking along and want to add some libraries, how do I do that with Paket? Let’s take a look. This is actually super easy, and it doesn’t make the project dependent on peripheral tooling like Visual Studio.

The first thing to do, inside the directory or project where I need the dependency, is to initialize it for Paket.

paket init

The next step is to add the dependency or dependencies that I’ll need. I’ll add a NuGet package that I’ll need shortly. The first package I want to grab for this project is FsUnit, a testing framework project managed and maintained by Dan Mohl @dmohl and Sergey Tihon @sergey_tihon.

paket add nuget FsUnit

When executing this dependency addition, the results displayed show which other dependencies were installed and which versions were pegged for this particular dependency.

✔ ~/Codez/sharpPaketsExample
15:33 $ paket add nuget FsUnit
Paket version 1.33.0.0
Adding FsUnit to /Users/halla/Codez/sharpPaketsExample/paket.dependencies
Resolving packages:
 - FsUnit 1.3.1
 - NUnit 2.6.4
Locked version resolution written to /Users/halla/Codez/sharpPaketsExample/paket.lock
Dependencies files saved to /Users/halla/Codez/sharpPaketsExample/paket.dependencies
Downloading FsUnit 1.3.1 to /Users/halla/.local/share/NuGet/Cache/FsUnit.1.3.1.nupkg
NUnit 2.6.4 unzipped to /Users/halla/Codez/sharpPaketsExample/packages/NUnit
FsUnit 1.3.1 unzipped to /Users/halla/Codez/sharpPaketsExample/packages/FsUnit
3 seconds - ready.

I took a look in the paket.dependencies and paket.lock files to see what was added for me with the paket add nuget command. The paket.dependencies file looked like this now.

source https://nuget.org/api/v2

nuget FsUnit

The paket.lock file looked like this.

NUGET
  remote: https://nuget.org/api/v2
  specs:
    FsUnit (1.3.1)
      NUnit (2.6.4)
    NUnit (2.6.4)

There are a few more dependencies that I want, so I went to work adding those. The first of this batch that I added was FAKE (more on this in a subsequent blog entry), which is a build tool based on RAKE.

paket add nuget FAKE

Next up was FsCheck.

paket add nuget FsCheck

The paket.dependencies file now had the following content.

source https://nuget.org/api/v2

nuget FAKE
nuget FsCheck
nuget FsUnit

The paket.lock file had the following items added.

NUGET
  remote: https://nuget.org/api/v2
  specs:
    FAKE (4.1.4)
    FsCheck (2.0.7)
      FSharp.Core (>= 3.1.2.5)
    FSharp.Core (4.0.0.1)
    FsUnit (1.3.1)
      NUnit (2.6.4)
    NUnit (2.6.4)
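With those packages unzipped under packages/, they can be pulled straight into an F# script. Here’s a minimal sketch using FsUnit (the exact DLL paths under packages/ are assumptions and will vary with package versions):

#r "packages/NUnit/lib/nunit.framework.dll"
#r "packages/FsUnit/lib/net45/FsUnit.NUnit.dll"

open FsUnit

// throws an assertion exception if this fails, stays quiet otherwise
(1 + 1) |> should equal 2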

Well, that got me started. The code repository at this state is located on this branch of the sharpSystemExamples repository. So on to some coding and the next topic. Keep reading, subscribe, or hit me up on Twitter @adron.

Linux Containers

Docker Tips n’ Tricks for Devs – #0001 – 3 Seconds to Postgresql

The easiest implementation of a Docker container with Postgresql that I’ve found recently needs just the following commands to pull and run a Postgresql server for you.

docker pull postgres:latest
docker run -p 5432:5432 postgres
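If you’d rather not tie up a terminal, the same container can run detached with a name, which also makes it easy to stop and remove later (a variation on the tip above, using standard docker flags):

docker run -d --name dev-postgres -p 5432:5432 postgres
docker stop dev-postgres
docker rm dev-postgres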

Then you can just connect to the Postgresql server by opening up pgAdmin with the following connection information:

  • Host: localhost
  • Port: 5432
  • Maintenance DB: postgres
  • Username: postgres
  • Password:

With that information you’ll be able to connect and use this as a development database that only takes about 3 seconds to launch whenever you need it.
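If you prefer the command line to pgAdmin, the same connection works with psql (assuming the Postgresql client tools are installed locally):

psql -h localhost -p 5432 -U postgres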

The Question of Docker, The Future of OS Virtualization

In this article I’m going to take a look at Docker and OS virtualization independently of each other. There’s a reason, which will unfold as I dig through some data and provide this look into what is and isn’t happening in the virtualization space.

It’s important to also note what methods were used to attain the information provided in this article. I have obtained information through speaking with Docker employees and key executives, including Ben Golub and founder Solomon Hykes, over the years since the founding of Docker (and its previous incarnation dotCloud, before the pivot and name change to Docker).

Beyond communicating directly with the Docker team and gaining insight from them I have also done a number of interviews over the course of 4 days. These interviews have followed a fairly standard set of questions and conversation about the Docker technology, including but not limited to the following questions.

  • What is your current use of Docker virtualization technologies?
  • What is your future intended use of Docker technologies?
  • What is the general current configuration and setup of your development team(s) and the tooling they use (i.e. stack: .NET, Java, Python, Node.js, etc.)?
  • Do you find Docker helps you to move forward faster than without it?

The History of OS-Level Virtualization

First, let’s take a look at where virtualization has been, then I’ll dive into where it is now, and then I’ll take a look at where it appears to be going in the future and derive some information from the interviews and discussions that I’ve had with various teams over the last 4 days.

The Short of It

OS-level virtualization is a virtualization approach that allows the installation of software in a complete file system, just like a hypervisor based virtualization server, but with dramatically faster installation and prospectively better speed overall, since the host OS itself provides the OS-level virtualization. This cuts down on excess redundancies between the core system and the respective virtual clients on the host.

Virtualization as a concept has been around since the 1960s, with IBM being heavily involved at the Cambridge Scientific Center. Over time developments continued, but the real breakthrough in pushing virtualization into the market was VMware in 1999 with their virtual platform. This hypervisor level virtualization grew into a huge industry with the help of VMware.

However OS-level virtualization, which is what Docker is based on, didn’t take off immediately when introduced. Many product options came out over time around OS-level virtualization, but nothing made a splash in the industry similar to what Docker has. Fast forward to today: Docker, released in 2013, has seen ever increasing developer demand and usage.

Timeline of Virtualization Development

Docker really brought OS-level virtualization to the developer community at the right time in regards to demands around web development and new ways to implement effective continuous delivery of applications. Docker has been one of the most extensively used OS-level virtualization tools to implement immutable infrastructure, continuous build, integration, and deployment environments, and to use as a general virtual environment to spool up resources as needed for development.

Where we Are With Virtualization

Currently Docker holds a pretty dominant position in the OS-level virtualization market space. Let’s take a quick review of their community statistics and involvement from just a few days ago.

The Stats: Docker on Github -> https://github.com/docker/docker

Watchers: 2017
Starred: 22941
Forks: 5617

16,472 Commits
3 Branches
102 Releases
983 Contributors

Just from that data we can ascertain that the Docker community is active. We can also take a deeper look into the forks, pull requests, acceptance rates, and related data to find that the overall codebase is healthy and well tended. This is good to know, since at one point there were questions about whether Docker could manage the open source legions pushing the product forward while maintaining the integrity, reputation, and quality of the product.

Now let’s take a look at what that position is based on, considering the interviews I’ve had over the last 4 days. Of the 17 people I spoke with, all knew what Docker is. That’s a great position to be in compared to just a few years ago.

Out of the 17 people I spoke with, 15 are working on teams that have implemented Docker, are implementing it, or are somewhere in between in their respective environments.

Of the 17, only 13 said they were concerned in some significant way about Docker security. All of these individuals were working on teams attempting to figure out a way to use Docker in production, instead of only in development or related uses.

The uses the 17 have for Docker, current or planned, vary as much as the individual work each is doing. There are, however, some core similarities in what they’re working on where Docker comes into play.

The most common similarity among Docker uses is simply as a platform to build out development testing environments or test servers. This is routinely a database server or a simple distributed database like Cassandra or Riak that can be built immutably, then destroyed and recreated whenever it is needed again for test and development. Some of the build outs are done with Docker specifically to work up a mock distributed database environment for testing. Mind you, I’m probably hearing about and seeing this because of my past work with Basho and other distributed systems programmers, companies, and efforts around this type of technology. It’s still interesting and very telling nonetheless.

The second most common usage is Docker somewhere in the continuous delivery chain. The push to move the continuous integration and delivery process to a more immutable, repeatable, and reliable process has been a perfect marriage between Docker and these needs. The ability to spin up entire environments in a matter of seconds, destroy them on a whim, and create them again moments later has made continuous delivery more powerful and more possible than it has ever been.

Some of the less common, yet still key, uses of Docker that came up during the interviews included in-memory cache servers, network virtualization, and distributed systems.

Virtualization’s Future

Pathing

With the history covered and the core uses of Docker discussed, let’s put those on the table with the acquisitions. The acquisitions by Docker provide some insight into the future direction of the company. The acquisitions so far include: Kitematic, SocketPlane, Koality, and Orchard.

From a high level strategic play, the path Docker is pushing into is a future of continued virtualization around, as the hipsters might say, “all the things”, starting with its purchases of Kitematic and SocketPlane. Both of these will help Docker expand past only OS virtualization and push more toward systemic virtualization of network environments with programmatic capabilities and more. These are capabilities that are needed to move past the legacy IT environments of yesteryear, which will open up more enterprise possibilities too.

To further their core use that exists today, Docker has purchased Koality. Koality provides parallelizable continuous integration, deployment, and related services. This enables Docker to provide more built out services around this very important use case.

The other acquisition was Orchard (orchardup.com), a startup that provides a Docker host in the cloud, instantly. This is a similar purchase to the Koality one. It bulks up capabilities that Docker already had at some level. It also pushes them forward with two branches of capabilities: SaaS on the web, and prospectively offering something behind the firewall, where the Koality acquisition might have some part to play also.

Threat Vectors

Even though the pathways toward the future seem clear for Docker in many ways, in other ways they seem dramatically less clear. For one, there are a number of competitive options in play now, gaining momentum, and on the horizon. One big threat is Google, whose lack of interest in Docker has led them to build competing tooling. If they push hard into the OS-level virtualization space they could become a substantial threat.

The other threat vector is the simple unknown of what could become a threat. Something like Mesos might explode in popularity, decide it doesn’t want to use Docker, and focus on another virtualization path. In the same sense, Mesos could commoditize Docker to a point where the value add at that level of virtualization doesn’t retain a business market value that would sustain Docker.

The invisible threat around this area right now is fairly large. There’s no better way to gauge this than to just get into a conversation with some developers about Docker. In one sense they love what it allows them to do, but the laundry list of things they’d like would allow a disruptor to come in and steal the Docker thunder pretty easily. To put it simply, there isn’t a magical allegiance to Docker; developers will pick whatever helps them move the ball forward fastest and easiest.

Another prospective threat is a massive purchase by a legacy software company like Oracle, Microsoft, or someone else. This could effectively destabilize the OSS aspects of the product and slow down development and progress, yet it could increase corporate adoption many times over what it is now. So this possibility shouldn’t be ruled out.

Summary

Docker has two major threats: direct competitors, and prospectively being leapfrogged by another level of virtualization. A further prospective threat to part of the company is the acquisition of Docker itself, though that could also mean a huge increase in enterprise penetration. Along the path the company and technology are moving forward on, there will be continued growth in usage and capabilities. The growth will continue among the leading technology startups and companies of this kind, while mid-size and larger corporate environments will continue to adopt and deploy at a slower pace.

A Question for You

I’ve put together what I’ve noticed, and I’d love to see the things that you, dear reader, might notice about the Docker momentum machine. Do you see networking as a strength, or other levels of virtualization, deployment of machines, integration or delivery, or some other part of this space as the way forward into the future? Let me know what your thoughts are on Twitter or whatever medium you feel like reaching out on. Of course, I’d also love to know if you think I’m wrong about anything I’ve written here.

_____100 |> F# Some Troubleshooting Linux

In the last article I wrote, on writing a code kata with F# on OS-X or Windows, I had wanted to use Linux but things just weren’t cooperating with me. Well, since that article I have resolved some of the issues I ran into, and this is the log of those issues.

Issue 1: “How can I resolve the ‘Could not fix timestamps in …’ ‘…Error: The requested feature is not implemented.’”

The first issue I ran into with running the ProjectScaffold build on Linux I wrote up and posted to Stack Overflow, titled “How can I resolve the ‘Could not fix timestamps in …’ ‘…Error: The requested feature is not implemented.’”. You can read more about the errors I was receiving in the Stack Overflow question, but below is the immediate fix. This fix should probably be added to any F# installation instructions for Linux as part of the default.

First, ensure that you have the latest version of mono. If you used the instructions to do a make and make install off of the fsharp.org site, you may not actually have the latest version of mono. Instead, here’s a good way to get the latest version of mono using apt-get. More information can be found on the mono page here.

sudo apt-get install mono-devel
sudo apt-get install mono-complete

Issue 2: “ProjectScaffold Error on Linux Generating Documentation”

The second issue I ran into I also posted to Stack Overflow, titled “ProjectScaffold Error on Linux Generating Documentation”. This one took a lot more effort. It also spilled over from Stack Overflow to become an actual Github issue (323) on the project. So check out those threads in case you run into anything similar.

In the next entry, to be published tomorrow, I’ll have some script tricks to use mono more efficiently to run *.exe commands and get things done with paket and fake in F# running on any operating system.

______10 |> F# – Moar Thinking Functionally (Notes)

More notes on the “Thinking Functionally” series. Previous notes are @ “_______1 |> F# – Getting Started, Thinking Functionally“.

#6 Partial Application

Breaking down functions into single parameter functions is the mathematically correct way of doing it, but that is not the only reason it is done — it also leads to a very powerful technique called partial function application.

For example:

let add42 = (+) 42 // partial application
add42 1
add42 3

[1;2;3] |> List.map add42

let twoIsLessThan = (<) 2 // partial application
twoIsLessThan 1
twoIsLessThan 3

// filter each element with the twoIsLessThan function
[1;2;3] |> List.filter twoIsLessThan

let printer = printfn "printing param=%i"

[1;2;3] |> List.iter printer

In each case above, the partially applied function can then be reused in multiple contexts. Partial application can also be used to fix function parameters.

let add1 = (+) 1
let add1ToEach = List.map add1   // fix the "add1" function

add1ToEach [1;2;3;4]

let filterEvens =
   List.filter (fun i -> i%2 = 0) // fix the filter function

filterEvens [1;2;3;4]

Then the following shows plug-in behavior that is transparent.

let adderWithPluggableLogger logger x y =
    logger "x" x
    logger "y" y
    let result = x + y
    logger "x+y"  result
    result 

let consoleLogger argName argValue =
    printfn "%s=%A" argName argValue 

let addWithConsoleLogger = adderWithPluggableLogger consoleLogger
addWithConsoleLogger  1 2
addWithConsoleLogger  42 99

let popupLogger argName argValue =
    let message = sprintf "%s=%A" argName argValue
    System.Windows.Forms.MessageBox.Show(
                                 text=message,caption="Logger")
      |> ignore

let addWithPopupLogger  = adderWithPluggableLogger popupLogger
addWithPopupLogger  1 2
addWithPopupLogger  42 99

Designing Functions for Partial Application

Sample calls to the list library:

List.map    (fun i -> i+1) [0;1;2;3]
List.filter (fun i -> i>1) [0;1;2;3]
List.sortBy (fun i -> -i ) [0;1;2;3]

Here are the same examples using partial application:

let eachAdd1 = List.map (fun i -> i+1)
eachAdd1 [0;1;2;3]

let excludeOneOrLess = List.filter (fun i -> i>1)
excludeOneOrLess [0;1;2;3]

let sortDesc = List.sortBy (fun i -> -i)
sortDesc [0;1;2;3]

Here are the commonly accepted guidelines for multi-parameter function design.

  1. Put first: the parameters most likely to be static. The parameters that are most likely to be “fixed” with partial application should come first.
  2. Put last: the data structure or collection (or most varying argument). Makes it easier to pipe a structure or collection from function to function. Like:
    let result =
       [1..10]
       |> List.map (fun i -> i+1)
       |> List.filter (fun i -> i>5)
    
  3. For well-known operations such as “subtract”, put the arguments in the expected order (see the sketch below).
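Here’s a quick sketch of guideline 3 next to a pipe-friendly variant (the function names are just illustrative):

// expected order: subtract 10 3 means 10 - 3
let subtract x y = x - y
subtract 10 3          // 7

// flipping the arguments makes it easy to pipe the data in
let subtractBy y x = x - y
10 |> subtractBy 3     // 7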

Wrapping BCL Function for Partial Application

Since in F# the data parameter generally comes last, whereas most BCL calls have the data parameter first, it’s good to wrap the BCL calls.

let replace oldStr newStr (s:string) =
  s.Replace(oldValue=oldStr, newValue=newStr)

let startsWith lookFor (s:string) =
  s.StartsWith(lookFor)

Then pipes can be used with the BCL call in the expected way.

let result =
     "hello"
     |> replace "h" "j"
     |> startsWith "j"

["the"; "quick"; "brown"; "fox"]
     |> List.filter (startsWith "f")

…or we can use function composition.

let compositeOp = replace "h" "j" >> startsWith "j"
let result = compositeOp "hello"

Understanding the Pipe Function

The pipe function is defined as:

let (|>) x f = f x

It allows us to put the function argument in front of the function instead of after.

let doSomething x y z = x+y+z
doSomething 1 2 3

If the function has multiple parameters, then it appears that the input is the final parameter. Actually what is happening is that the function is partially applied, returning a function that has a single parameter: the input.

let doSomething x y  =
   let intermediateFn z = x+y+z
   intermediateFn        // return intermediateFn

let doSomethingPartial = doSomething 1 2
doSomethingPartial 3
3 |> doSomethingPartial

#7 Function Associativity and Composition

Function Associativity

This…

let F x y z = x y z

…means this…

let F x y z = (x y) z

These three forms are also equivalent.

let F x y z = x (y z)
let F x y z = y z |> x
let F x y z = x <| y z

Function Composition

Here’s an example:

let f (x:int) = float x * 3.0  // f is int->float
let g (x:float) = x > 4.0      // g is float->bool

We can create a new function h that takes the output of “f” and uses it as the input for “g”.

let h (x:int) =
    let y = f(x)
    g(y)                   // return output of g

A much more compact way is this:

let h (x:int) = g ( f(x) ) // h is int->bool

//test
h 1
h 2
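F# also has a built-in composition operator, >>, that does this same wiring. It’s defined essentially as let (>>) f g x = g (f x), so h can be written as:

let h2 = f >> g    // h2 is int->bool, same as h
h2 1               // false
h2 2               // true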

These are notes; to read more, check out the Function Composition post.