Welcome to GeekCap! This is the preview release of the GeekCap website, so let me show you around a bit. This page is used for announcements (like this one) and shows you what's new (below).
GeekCap is arranged into "Campuses", or communities if you like. Each campus presents a collection of three things for its topic area:
- Articles: just as the name suggests, articles teach you about a topic. Articles may be hosted here at GeekCap or may link to other sites (I have over 400 articles on InformIT.com, and I'll point you to them rather than rewrite them!)
- Courses: courses consist of online classes, usually 20-30 minutes in length. The standard format is an MP4 video of a slide presentation narrated by one of our instructors. Some classes are free and some cost education units, but we'll get into that later. You are free to pick and choose the classes you want to take; courses are just a categorical grouping for you.
- Learning Tracks: if there is a topic you'd like to learn, a learning track can help. Each track takes a subject, like Spring RESTful Web Services, and organizes a list of articles, books, web resources, classes, and so forth into a plan you can follow to learn that topic.
Feel free to look around - I'd recommend starting with a campus that interests you, such as the Java Campus (mine), and browsing its articles, courses, and learning tracks. I welcome your feedback at firstname.lastname@example.org.
Although I haven't been publishing much directly on GeekCap, I have been writing. Below are links to articles I wrote for JavaWorld about Vert.x, which has been called "Node running in a JVM"; about Akka and its use of the Actor model to build highly scalable concurrent code; and about Storm, which provides Hadoop-like analysis but for live streaming data. There are also links to the three installments I wrote about Hadoop for InformIT.com. The Hadoop article series walks you through installing and setting up a Hadoop environment, building the "Word Count" MapReduce application (the MapReduce equivalent of "Hello, World"), and thinking in MapReduce. The key to writing good Hadoop, and hence MapReduce, applications is learning to think about your problems the MapReduce way, and Part 3 in the series shows you just that!
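To give you a taste of "thinking in MapReduce" before you dive into the articles, here is a minimal Word Count sketch in plain Java - no Hadoop dependency, just the map-then-reduce shape the series teaches (the class and method names are mine, purely for illustration):

```java
import java.util.*;
import java.util.stream.*;

public class WordCountSketch {
    // "Map" phase: emit a (word, 1) pair for every word in a line.
    static Stream<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                     .filter(w -> !w.isEmpty())
                     .map(w -> Map.entry(w, 1));
    }

    // "Reduce" phase: sum the counts emitted for each distinct word.
    static Map<String, Integer> reduce(Stream<Map.Entry<String, Integer>> pairs) {
        return pairs.collect(
            Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, Integer::sum));
    }

    public static void main(String[] args) {
        List<String> lines = List.of("Hello World", "Hello MapReduce");
        Map<String, Integer> counts =
            reduce(lines.stream().flatMap(WordCountSketch::map));
        System.out.println(counts.get("hello")); // 2
        System.out.println(counts.get("world")); // 1
    }
}
```

In real Hadoop the map and reduce phases run on different machines and the framework shuffles the pairs between them, but the mental model - emit key/value pairs, then aggregate per key - is exactly this.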
The latest article I published for JavaWorld is about Spring Data. In it I review how to use Spring Data to manage persistence to a MongoDB document store. Along the way we review query capabilities both by naming convention and with QueryDSL, which enables type-safe queries. If you're looking for an easy yet robust way to manage NoSQL data persistence, Spring Data may be just what you're looking for!
Open source Java projects: Spring Data (external link)
Posted By Steven Haines on Oct 22, 2013
Spring Data provides the boilerplate code and plumbing to enable you to interact with various NoSQL repositories in a Spring-consistent manner. Depending on your needs, you could even find the persistence logic for your entire application defined in a handful of Spring Data interfaces. Get started with Spring Data domain objects and repositories, then learn about two ways to implement Spring queries in Spring Data: by naming convention or using QueryDSL, which ensures type-safe queries that are validated at compile time.
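Spring Data's naming-convention queries can feel magical, so here is a rough sketch of the idea in plain Java: a repository is just an interface, and the "query" is derived from the method name. This uses a JDK dynamic proxy over an in-memory list - it is emphatically not how Spring Data is implemented, and the `PersonRepository` and `repository(...)` names are hypothetical, but it shows why `findByLastName` needs no method body:

```java
import java.lang.reflect.*;
import java.util.*;
import java.util.stream.*;

public class DerivedQuerySketch {
    record Person(String firstName, String lastName) {}

    // The repository is only an interface; the query comes from the method name.
    interface PersonRepository {
        List<Person> findByLastName(String lastName);
    }

    static PersonRepository repository(List<Person> data) {
        return (PersonRepository) Proxy.newProxyInstance(
            PersonRepository.class.getClassLoader(),
            new Class<?>[] { PersonRepository.class },
            (proxy, method, args) -> {
                // Very loosely mirrors query derivation: strip the "findBy"
                // prefix and match the remaining property name.
                String prop = method.getName().replaceFirst("^findBy", "");
                String wanted = (String) args[0];
                return data.stream()
                           .filter(p -> prop.equals("LastName")
                                        && p.lastName().equals(wanted))
                           .collect(Collectors.toList());
            });
    }

    public static void main(String[] args) {
        PersonRepository repo = repository(List.of(
            new Person("Steven", "Haines"), new Person("Ada", "Lovelace")));
        System.out.println(repo.findByLastName("Haines").size()); // 1
    }
}
```

The real framework parses much richer method names (`findByLastNameAndAgeGreaterThan`, paging, sorting) and translates them into MongoDB queries - the article covers that, plus QueryDSL for compile-time-checked alternatives.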
Open source Java projects: Vert.x (external link)
Posted By Steven Haines on Jul 30, 2013
If you were excited about Node.js, Vert.x could be the next big thing for you: a similarly architected enterprise system that is built on the JVM. The latest installment of the JavaWorld Open source Java projects series introduces Vert.x with two hands-on examples based on the newly released Vert.x 2.0: First, build a simple Vert.x web server, then discover for yourself how the Vert.x event bus handles publish/subscribe and point-to-point messaging for effective enterprise integration.
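The event-bus distinction the article explores - publish/subscribe versus point-to-point - is easy to sketch without Vert.x at all. This toy in-memory bus (my own `EventBusSketch`, not the Vert.x API) shows the contract: `publish` delivers a message to every handler registered at an address, while `send` delivers it to exactly one:

```java
import java.util.*;
import java.util.function.Consumer;

public class EventBusSketch {
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();
    private final Map<String, Integer> cursor = new HashMap<>(); // round-robin per address

    public void register(String address, Consumer<String> handler) {
        handlers.computeIfAbsent(address, a -> new ArrayList<>()).add(handler);
    }

    // publish: every handler at the address receives the message
    public void publish(String address, String message) {
        handlers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }

    // send: exactly one handler receives it (round-robin), point-to-point style
    public void send(String address, String message) {
        List<Consumer<String>> list = handlers.getOrDefault(address, List.of());
        if (list.isEmpty()) return;
        int i = cursor.merge(address, 1, Integer::sum) % list.size();
        list.get(i).accept(message);
    }

    public static void main(String[] args) {
        EventBusSketch bus = new EventBusSketch();
        List<String> received = new ArrayList<>();
        bus.register("news", m -> received.add("A:" + m));
        bus.register("news", m -> received.add("B:" + m));
        bus.publish("news", "hi");  // both handlers get it
        bus.send("news", "psst");   // only one handler gets it
        System.out.println(received.size()); // 3
    }
}
```

Vert.x adds the parts that matter in production - the event loop, verticles, clustering, and replies - but this is the messaging contract its event bus guarantees.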
Open source Java projects: Akka (external link)
Posted By Steven Haines on May 8, 2013
The actor model is a message-passing paradigm that resolves some of the major challenges of writing concurrent, scalable code for today's distributed systems. In this installment of Open source Java projects, Steven Haines introduces Akka, a JVM-based toolkit and runtime that implements the actor model. Get started with a simple program that demonstrates how an Akka message passing system is wired together, then build a more complex program that uses concurrent processes to compute prime numbers.
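If the actor model is new to you, its core idea fits in a few lines of plain Java: an actor is private state plus a mailbox, drained by a single thread, so the state is only ever touched sequentially and no locks are needed. This sketch (my own classes, not Akka's API) wires up one such actor:

```java
import java.util.concurrent.*;

public class ActorSketch {
    // A minimal "actor": a mailbox drained by one thread. Senders never touch
    // the actor's state directly; they only enqueue messages.
    static class CountingActor {
        private final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
        private long sum = 0; // touched only by the actor's own thread
        private final CompletableFuture<Long> result = new CompletableFuture<>();

        CountingActor() {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        int msg = mailbox.take();        // process one message at a time
                        if (msg < 0) { result.complete(sum); return; } // poison pill
                        sum += msg;
                    }
                } catch (InterruptedException ignored) { }
            });
            t.setDaemon(true);
            t.start();
        }

        void tell(int msg) { mailbox.add(msg); }
        long await() throws Exception { return result.get(5, TimeUnit.SECONDS); }
    }

    public static void main(String[] args) throws Exception {
        CountingActor actor = new CountingActor();
        for (int i = 1; i <= 100; i++) actor.tell(i);
        actor.tell(-1); // stop
        System.out.println(actor.await()); // 1 + 2 + ... + 100 = 5050
    }
}
```

Akka generalizes this enormously - supervision, routers, remoting, typed messages - but "tell, don't share" is the habit the article's prime-number example builds on.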
Posted By Steven Haines on Jan 23, 2013
In the third article in this series, I demonstrate how to build a meaningful Hadoop MapReduce application to analyze hourly website usage from a set of Apache HTTP Server logs. Learn how to analyze a business problem the MapReduce way and then how to structure key and value types to fit the MapReduce model.
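The key-structuring idea from that article can be previewed in plain Java: map each Apache log line to its hour of day (the key) paired with a count of 1, then reduce by summing per key. This sketch assumes Common Log Format timestamps like `[22/Oct/2013:13:55:36 -0400]`; the class and method names are mine:

```java
import java.util.*;
import java.util.regex.*;
import java.util.stream.*;

public class HourlyHitsSketch {
    // Pull the hour out of a Common Log Format timestamp; the hour is the
    // MapReduce key, so all hits in the same hour land in the same reducer.
    private static final Pattern HOUR =
        Pattern.compile("\\[\\d{2}/\\w{3}/\\d{4}:(\\d{2}):");

    // Map: line -> (hour, 1); Reduce: count per hour.
    static Map<Integer, Long> hitsByHour(List<String> logLines) {
        return logLines.stream()
            .map(HOUR::matcher)
            .filter(Matcher::find)
            .collect(Collectors.groupingBy(
                m -> Integer.parseInt(m.group(1)), Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> log = List.of(
            "127.0.0.1 - - [22/Oct/2013:13:55:36 -0400] \"GET / HTTP/1.1\" 200 2326",
            "127.0.0.1 - - [22/Oct/2013:13:59:02 -0400] \"GET /a HTTP/1.1\" 200 100",
            "127.0.0.1 - - [22/Oct/2013:14:01:10 -0400] \"GET /b HTTP/1.1\" 404 50");
        System.out.println(hitsByHour(log).get(13)); // 2
        System.out.println(hitsByHour(log).get(14)); // 1
    }
}
```

Once you see the problem as "choose a key, emit pairs, aggregate per key," the jump to the real Hadoop mapper and reducer in the article is mostly mechanical.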
Building a MapReduce Application with Hadoop (external link)
Posted By Steven Haines on Jan 16, 2013
As the amount of captured data increases over the years, so do our storage needs. Companies are realizing that “data is king,” but how do we analyze it? Through Hadoop. In the second article in this series, I explain what a MapReduce application is and how to build a simple one.
We haven't added any courses yet, but keep your eyes open; they'll be coming soon!
We haven't added any learning tracks yet, but keep your eyes open; they'll be coming soon!