Moving data from SQL Server to MongoDB/CouchDB/Couchbase

I must have rewritten the first sentence of this post 20 times. The reason is that I realize the audience for it must be incredibly small. You have to work somewhere that is using SQL Server. You also have to work somewhere that is using MongoDB or a Couch product (CouchDB, Couchbase).

Even further, you have to want to move data from SQL Server to a document database that stores its data as JSON. But who knows, maybe someone will find this approach useful. =)

The goal is to take data from SQL Server (relational) to MongoDB (document). I'm assuming that since you're running SQL Server, you have some .NET languages in use around the office.

To start, make a new Visual Studio solution and create a Visual C# SQL CLR Database Project. You can find this under the Database grouping of project types.

The basic idea is that you want to write code, deploy it to SQL Server and then run the code as a stored procedure that produces JSON.

Example of code:

using System.Collections.Generic;
using System.Data;
using Microsoft.SqlServer.Server;
using POC;
using POC.PageElements;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void StoredProcedure1()
    {
        Builder builder = new Builder();
        var articles = builder.Build<Article>("exec usp_get_articles");

        foreach (var article in articles)
        {
            article.Contributors = builder.Build<Contributor>("exec usp_get_article_contributors " + article.ArticleId);
            foreach (var contributor in article.Contributors)
            {
                contributor.SocialLinks = builder.Build<SocialLink>("select * from contributor_social_link where contributor_id = " + contributor.Id);
            }

            article.Categories = builder.Build<Category>("exec usp_get_article_categories " + article.ArticleId);

            article.Pages = builder.Build<Page>("exec usp_get_article_pages " + article.ArticleId);
            foreach (var page in article.Pages)
            {
                var paragraphs = builder.Build<Paragraph>("exec usp_get_paragraphs " + page.Id);
                foreach (var paragraph in paragraphs)
                {
                    var images = builder.Build<Image>("exec usp_get_paragraph_image " + paragraph.Id);
                    paragraph.Image = (images.Count > 0) ? images.ToArray()[0] : null;
                }

                var tables = builder.Build<Table>("exec usp_get_tables " + page.Id);
                foreach (var table in tables)
                {
                    // populate each table (body omitted here)
                }

                var imageGroups = builder.Build<ImageGroup>("exec usp_get_image_groups " + page.Id);
                foreach (var imageGroup in imageGroups)
                {
                    var images = builder.Build<Image>("exec usp_get_image_group_images " + imageGroup.Id);
                    imageGroup.Images = (images.Count > 0) ? images : new List<Image>();
                }

                var inlineImages = builder.Build<Image>("exec usp_get_inline_images " + page.Id);
                foreach (var inlineImage in inlineImages)
                {
                    // attach inline images to the page (body omitted here)
                }

                var videos = builder.Build<Video>("exec usp_get_videos " + page.Id);
                foreach (var video in videos)
                {
                    // attach videos to the page (body omitted here)
                }

                var blockQuotes = builder.Build<BlockQuote>("exec usp_get_block_quotes " + page.Id);
                foreach (var blockQuote in blockQuotes)
                {
                    // attach block quotes to the page (body omitted here)
                }
            }
        }

        // Stream the results back to the caller as (id, json) rows.
        SqlPipe sp = SqlContext.Pipe;

        var dataRecord = new SqlDataRecord(
            new SqlMetaData("id", SqlDbType.Int),
            new SqlMetaData("json", SqlDbType.Text));

        sp.SendResultsStart(dataRecord);
        foreach (var article in articles)
        {
            dataRecord.SetInt32(0, article.ArticleId);
            dataRecord.SetString(1, article.ToString()); // serialized JSON, see below
            sp.SendResultsRow(dataRecord);
        }
        sp.SendResultsEnd();
    }
}

Some of the code calls functions that I haven't included, but you can use your imagination to understand what they do: they simply run queries that populate a list of objects. We keep constructing a very rich Article object by making one database call after another.
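To make the pattern concrete, here is a minimal sketch (in Java, since the post doesn't show the actual Builder class) of what a generic build helper might do with the rows a query returns: map each row's columns onto the matching fields of an object. The RowMapperDemo class, its build method, and the Contributor type are all hypothetical stand-ins, and the rows here are plain maps rather than a live result set.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RowMapperDemo {
    // Map query rows (column name -> value) onto matching fields of T.
    static <T> List<T> build(List<Map<String, Object>> rows, Class<T> type) {
        List<T> out = new ArrayList<>();
        try {
            for (Map<String, Object> row : rows) {
                T obj = type.getDeclaredConstructor().newInstance();
                for (Map.Entry<String, Object> col : row.entrySet()) {
                    var field = type.getDeclaredField(col.getKey());
                    field.setAccessible(true);
                    field.set(obj, col.getValue());
                }
                out.add(obj);
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return out;
    }

    public static class Contributor {
        public int id;
        public String name;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows =
            List.of(Map.<String, Object>of("id", 5, "name", "Ada"));
        List<Contributor> people = build(rows, Contributor.class);
        System.out.println(people.get(0).id + " " + people.get(0).name); // 5 Ada
    }
}
```

The real helper would also execute the query; the point is only that one generic method can hydrate any of the Article, Page, or Contributor shapes from rows.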

Once we're done, we have a list of articles that have been populated from the database. Remember, the code above is actually deployed as executable code in SQL Server. Inside Management Studio you would navigate to your database and then Programmability -> Assemblies. If deployment goes well, you'll see your code there.

The end of the code blob above shows that we call article.ToString(). The contents of that call:

public override string ToString()
{
    return JsonConvert.SerializeObject(this);
}

As you can see, we're simply serializing the article to JSON using the Newtonsoft JSON library for C#.
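For readers without the C# project handy, here is a hedged Java sketch of the same idea: the object overrides toString() so it can render itself as JSON. It is hand-rolled rather than using a serialization library, and the Article class and its fields are made up for illustration.

```java
public class ArticleJsonDemo {
    // Quote and escape a string for JSON output.
    static String quote(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"") + "\"";
    }

    static class Article {
        int id;
        String title;

        Article(int id, String title) {
            this.id = id;
            this.title = title;
        }

        // Same idea as the C# override above: the object serializes itself,
        // so callers only ever need toString().
        @Override
        public String toString() {
            return "{" + quote("id") + ":" + id + ","
                       + quote("title") + ":" + quote(title) + "}";
        }
    }

    public static void main(String[] args) {
        System.out.println(new Article(7, "Polyglot persistence"));
        // {"id":7,"title":"Polyglot persistence"}
    }
}
```

In practice you would reach for a real library (Jackson or Gson on the JVM, Json.NET in C#) exactly as the post does; the override is the pattern that matters.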

I will say that deploying code to SQL Server (especially third-party libraries) is not easy. It's possible, but it will take some Googling and investigation.
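As a rough guide, the deployment usually boils down to T-SQL along these lines. This is a hedged sketch, not the post's actual script: the assembly names, file paths, and procedure name are placeholders, and your permission requirements may differ.

```sql
-- Enable CLR execution on the instance (off by default).
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;

-- Register the third-party and project assemblies. Json.NET generally
-- can't run as SAFE, which is part of why deployment takes investigation.
CREATE ASSEMBLY [Newtonsoft.Json]
    FROM 'C:\deploy\Newtonsoft.Json.dll'
    WITH PERMISSION_SET = UNSAFE;

CREATE ASSEMBLY [JsonPoc]
    FROM 'C:\deploy\JsonPoc.dll'
    WITH PERMISSION_SET = UNSAFE;

-- Expose the CLR method as a normal stored procedure.
CREATE PROCEDURE usp_articles_as_json
AS EXTERNAL NAME [JsonPoc].[StoredProcedures].[StoredProcedure1];
```

Depending on your instance you may also need to mark the database TRUSTWORTHY or sign the assemblies before the UNSAFE registration is allowed.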

The benefit of being able to produce JSON from SQL Server is massive, however. You can quickly change the C# objects that get serialized to JSON, which helps development velocity. In my use case each article runs roughly 13 queries, which means the total for 1,000 articles would be 13,000 queries!

However, in my testing I was able to produce around 100,000 articles in less than 10 minutes. The secret sauce is that the code runs on the database machine itself, so we don't pay the price of network latency on each of those queries.

Hopefully this approach is helpful in sending non-trivially formed data from SQL Server to MongoDB!


Cross Database Joins with Groovy/Gremlin

It's funny how things connect to each other. The other day I was watching a video about a graph database, which led me to the Titan project, which led me to this blog post.

Our database environment at work is definitely polyglot. It's a mixture of SQL Server, Postgres, CouchDB, MongoDB, MySQL, Couchbase and a few others. Most of the time these databases do not need to talk directly; each one is usually a silo and fairly self-contained.

However there are a few strong relationships between databases. MySQL talks to Mongo, and SQL Server talks to Mongo. They have a lot of data in common.

The punchline of the post is that Gremlin is a Groovy-based REPL that allows us to easily talk to heterogeneous databases.

Obviously we could write pure Java to do this, but Groovy with Gremlin makes it a joy, not a chore.

For example, here is some code to connect to SQL Server (assuming you have the SQL JDBC jar file):

import groovy.sql.Sql

def getArticles = {
    def sql = Sql.newInstance("jdbc:sqlserver://;databaseName=cms", "login", "password", "")

    def map = new HashMap()

    sql.eachRow("select article_id, title from article") {
        map[it.article_id] = it.title
    }

    return map
}

Very simple. Here is some code in the same script to connect to Mongo and get some data:

def getMongoDatabase = {
    def mongo = new GMongo("")
    return mongo.getDB("articles")
}

def getExistingMongoArticleIds = { db ->
    def cursor = db.articles.find([:], [_id: 1]).toArray()
    def article_ids = []

    cursor.each { item ->
        article_ids << item._id
    }

    return article_ids
}

The beauty of this is that as our company moves toward more and more JSON (we use it in our services, our code and our databases), Groovy provides a concise syntax and is easy to use.

Granted, all the data has to come back to the client to do cross-system joins, but with some creativity the performance impact can be minimized.
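One way to picture that client-side join, sketched here in Java with made-up data: pull the article ids from each system into memory and diff them to find what still needs to move. The method and variable names are hypothetical; the maps stand in for the results of getArticles() and getExistingMongoArticleIds() above.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CrossDbJoinDemo {
    // "Join" the two id collections in memory: anything present in
    // SQL Server but not yet in Mongo still needs to be migrated.
    static List<Integer> articlesToMigrate(Map<Integer, String> sqlArticles,
                                           Set<Integer> mongoIds) {
        List<Integer> missing = new ArrayList<>();
        for (Integer id : sqlArticles.keySet()) {
            if (!mongoIds.contains(id)) {
                missing.add(id);
            }
        }
        Collections.sort(missing);
        return missing;
    }

    public static void main(String[] args) {
        Map<Integer, String> fromSqlServer = Map.of(1, "Intro", 2, "Gremlin", 3, "CLR");
        Set<Integer> alreadyInMongo = Set.of(1, 3);
        System.out.println(articlesToMigrate(fromSqlServer, alreadyInMongo)); // [2]
    }
}
```

Fetching only the ids (as the Groovy projection `[_id : 1]` does) is the main trick for keeping the data that crosses the wire small.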
