Real-time logs with Chrome Dev Tools and SignalR, part 2


This is the second post in a series about creating real-time logging using Chrome Dev Tools and real-time communication libraries such as SignalR. The first post focused on the server-side portion of the setup. This post focuses on creating the Chrome DevTools plugin that displays the logging information from the server.

About Chrome plugins
If you know HTML/JavaScript/CSS, creating a Chrome extension is actually really easy. The only gripe I have is that there seems to be no way to inspect DevTools extension panels. But you can get around that by forwarding errors from window.onerror and try/catch blocks to the inspected window or the background page console. Another thing to keep in mind is that certain features will not work if you don't request the appropriate permissions in the plugin configuration. I strongly suggest reading the Chrome developer documentation for a better understanding of how DevTools plugins work.

Creating the Plugin
I will start off with a layout of the plugin files in the file system and explain each file in a logical order.
Plugin layout

Plugin Manifest
This file tells Chrome about the plugin and the various files it needs to work correctly.

{
  "manifest_version": 2,
  "name": "Real Time Logger",
  "description": "This extension allows applications to send logs to client without embedded scripts",
  "version": "1.0",
  "background":{
  	"persistent": true,
    "scripts": ["lib/jquery-2.0.3.js","lib/jquery.signalR-2.0.1.js","background.js"]
  },
  "permissions": [
    "tabs", "http://*/*", "https://*/*"
  ],
  "devtools_page": "devtools.html"
}

The “background” directive instructs Chrome to generate a background page and include the three JS files as scripts. Alternatively, you can create your own background.html and include the scripts yourself. The permissions control access to otherwise restricted capabilities of the Chrome extensions API. The devtools_page is the page that will create the panel used by the plugin to display the log information.

background.js
This is the workhorse of the plugin. It maintains all the connections to the server, receives the log messages, and passes them on to the respective panels to be displayed.

var connectionlib = {
    signalr: function () {
        var connection;
        return {
            init: function (settings, handler) {
                var url = settings['baseurl'] + settings['url'];
                connection = $.hubConnection(url, { useDefaultPath: false });
                var proxy = connection.createHubProxy(settings['hub']);
                proxy.on('onSql', function (sql) {
                    handler(sql);
                });

                connection.start();
            },
            stop: function () {
                connection.stop();
            }
        };
    }
};


chrome.runtime.onConnect.addListener(function (port) {

    var lib;

    chrome.tabs.executeScript(parseInt(port.name, 10), { file: 'autodiscover.js' }, function (result) {

        // Bail out if the inspected page did not expose any logging configuration
        if (!result || !result[0]) {
            return;
        }

        var options = result[0].split(';');
        var settings = {};
        for (var o in options) {
            var s = options[o].split('=');
            settings[s[0]] = s[1];
        }

        lib = connectionlib[settings['library']]();
        lib.init(settings, function (sql) {
            port.postMessage(sql);
        });

    });

    port.onDisconnect.addListener(function (p) {
        if (lib) {
            lib.stop();
        }
    });

});

The connectionlib object is just a simple way to support multiple real-time libraries. The listener function is where all the magic happens. For every DevTools panel that connects to it, it attempts to detect whether the inspected page supports real-time logging and, if so, connects to it.

autodiscover.js
The background page injects this code into the inspected window, and if it finds a meta tag with real-time logging configuration, it sends that configuration back to the background page. The tag carries the settings as semicolon-separated key=value pairs, for example something like <meta name="real-time-log" content="library=signalr;url=/signalr;hub=loggingHub"> (the hub name here is just an illustration; use whatever your server exposes).

var autoDiscover = document.querySelector('meta[name="real-time-log"][content]');
if (autoDiscover) {
    // The value of this final expression is what chrome.tabs.executeScript
    // hands back to the background page callback.
    autoDiscover.content + ';baseurl=' + window.location.protocol + '//' + window.location.host;
}

When I thought of ways the DevTools plugin could discover logging capabilities, the first thing that came to mind was meta tags. However, the same thing could be achieved with custom headers or some other content in the page. Another option is to skip automatic discovery altogether and let the user enter the URL in the panel.

devtools.js
This code is very simple. All it does is create our logging panel when devtools opens.

chrome.devtools.panels.create("Real Time Log",
    "icon.png",
    "Panel.html",
    function(panel) {
      // code invoked on panel creation
    }
);

panel.js
This code connects to the background page and waits for incoming log messages to display.

var log = document.getElementById('log');
var clear = document.getElementById('clear');

clear.addEventListener("click", function(){
	log.innerHTML = '';
});

var backgroundConnection = chrome.runtime.connect({
    name: ''+ chrome.devtools.inspectedWindow.tabId + ''
});

backgroundConnection.onMessage.addListener(function(sql){
	var li = document.createElement('pre');
	li.innerHTML =  hljs.highlight('sql', sql).value;
	log.appendChild(li);
});

panel.html
This page contains the elements the user can see and interact with in the devtools panel. The log element will display all log messages. Highlight.js is used for syntax highlighting of the messages.

<html>
<head>
<link rel="stylesheet" href="lib/highlight/styles/xcode.css" />
<link rel="stylesheet" href="panel.css" />
</head>
<body>
<button id="clear">Clear</button>
<div id="log"></div>
<script src="lib/highlight/highlight.pack.js"></script>
<script src="panel.js"></script>
</body>
</html>

panel.css
This is some basic CSS for presenting the logs.

pre {
	border-bottom:#cccccc 1px solid;
	padding-bottom:3px;
}

devtools.html
All this file does is include devtools.js.

<script src="devtools.js"></script>

What I have described so far in these two posts is really all you need for a basic implementation of this real-time logging concept. You can download highlight.js from http://highlightjs.org/. I was only able to get the SignalR client files by creating a dummy project and adding the SignalR package to it via NuGet.

General Overview of the entire solution:
Real time plugin

The code in this post is a really basic, get-your-hands-dirty example. I created a GitHub project which I will use to take the idea further. You are free to download the plugin, try it out, and send pull requests if you wish. The project README explains how to install and use the plugin.

C# datatables parser


The jQuery Datatables plugin is a very powerful JavaScript grid plugin which comes with the following features out of the box:

  • filtering
  • sorting
  • paging
  • jQuery UI ThemeRoller support
  • plugins/extensions
  • Ajax/remote and local data source support

Setting up datatables on the client is very simple for basic scenarios. Here is an example of the markup and the initialization code.

<table id="PeopleListTable">
  <thead>
    <tr>
      <th>Name</th>
      <th>Age</th>
   </tr>
  </thead>
  <tbody>
    <tr>
      <td>John Doe</td>
      <td>25</td>
    </tr>
  </tbody>
</table>
$(function(){
  $('#PeopleListTable').dataTable();
});

Server Side Processing
The Datatables plugin supports loading table data, paging, sorting and filtering via ajax. Datatables sends a specific set of parameters which the server is expected to process, returning the result in JSON format. Here is a sample of the request parameters sent via ajax (the shape of the expected response is sketched after the list):

sEcho:35
iColumns:7
sColumns:
iDisplayStart:0
iDisplayLength:10
mDataProp_0:FirstName
mDataProp_1:LastName
mDataProp_2:BirthDateFormatted
mDataProp_3:BirthDate
mDataProp_4:Weight
mDataProp_5:Height
mDataProp_6:Children
sSearch:
bRegex:false
sSearch_0:
bRegex_0:false
bSearchable_0:true
sSearch_1:
bRegex_1:false
bSearchable_1:true
sSearch_2:
bRegex_2:false
bSearchable_2:true
sSearch_3:
bRegex_3:false
bSearchable_3:true
sSearch_4:
bRegex_4:false
bSearchable_4:true
sSearch_5:
bRegex_5:false
bSearchable_5:true
sSearch_6:
bRegex_6:false
bSearchable_6:true
iSortCol_0:1
sSortDir_0:asc
iSortingCols:1
bSortable_0:true
bSortable_1:true
bSortable_2:true
bSortable_3:true
bSortable_4:true
bSortable_5:true
bSortable_6:true

For a detailed description of each parameter, please see the datatables.net documentation.
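The server is expected to answer with a JSON object using the legacy response field names. The parser's Parse() method already returns an object with this general shape, so the class below is purely illustrative:

// Rough shape of the legacy (1.x) DataTables server-side response.
// Illustrative only; the parser's Parse() result already provides these fields.
public class DataTablesResponse<T>
{
    public int sEcho { get; set; }                // value echoed back from the request
    public int iTotalRecords { get; set; }        // total records before filtering
    public int iTotalDisplayRecords { get; set; } // total records after filtering
    public IList<T> aaData { get; set; }          // the rows for the current page
}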

mDataProp_n Parameters
Datatables supports displaying columns in any order in the table by setting a column's mData option to a specific property in the JSON result array. For each column, it sends a parameter in the format 'mDataProp_columnIndex=propertyName'. As we can see in the example above, FirstName is the mData property of the first column in the table. It is important to understand these column index to property mappings because the sorting and filtering parameters rely on them being interpreted properly.
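As an illustration (this is a sketch, not the parser's actual code), the column index to property name map can be read from the request like this, assuming an ASP.NET MVC HttpRequestBase:

// Sketch: build a columnIndex -> propertyName map from the mDataProp_n
// parameters. Requires System.Collections.Generic and System.Web.
private static Dictionary<int, string> GetPropertyMap(HttpRequestBase request)
{
    var columnCount = int.Parse(request["iColumns"]);
    var map = new Dictionary<int, string>();

    for (var i = 0; i < columnCount; i++)
    {
        map[i] = request["mDataProp_" + i];
    }

    return map;
}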

Sorting
Datatables has a global setting called bSort which enables or disables sorting for the entire table. It also has a per-column property called bSortable which enables/disables sorting for a specific column. For each column, the server-side script should look for a parameter in the format 'bSortable_columnIndex=true/false'. Sorting itself is described by parameters in the formats 'iSortCol_sortIndex=columnIndex' and 'sSortDir_sortIndex=asc', where 'sortIndex' is the position of the column in the list of sorted columns and 'asc' (or 'desc') is the direction in which that column should be sorted.
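A rough sketch of how those sorting parameters could be translated into an OrderBy on an IQueryable (again, not the parser's actual implementation; it reuses the property map from the previous sketch):

// Sketch: apply the first sort instruction to an IQueryable<T>.
// Requires System.Linq, System.Linq.Expressions and System.Web.
private static IQueryable<T> ApplySort<T>(IQueryable<T> source,
                                          HttpRequestBase request,
                                          Dictionary<int, string> propertyMap)
{
    var sortColumn = int.Parse(request["iSortCol_0"]);
    var direction = request["sSortDir_0"];

    if (request["bSortable_" + sortColumn] != "true")
    {
        return source;
    }

    // Build val => val.PropertyName dynamically.
    var param = Expression.Parameter(typeof(T), "val");
    var property = Expression.Property(param, propertyMap[sortColumn]);
    var lambda = Expression.Lambda(property, param);

    var methodName = direction == "asc" ? "OrderBy" : "OrderByDescending";

    var call = Expression.Call(typeof(Queryable), methodName,
                               new[] { typeof(T), property.Type },
                               source.Expression, Expression.Quote(lambda));

    return source.Provider.CreateQuery<T>(call);
}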

Filtering
Datatables has a global setting called bFilter which enables or disables filtering for the entire table. It also has a per-column property called bSearchable which enables/disables filtering for a specific column. For each column, the server-side script should look for a parameter in the format 'bSearchable_columnIndex=true/false'. Global filtering works by searching all the searchable columns of a row for any value which contains the filter value sent in the format 'sSearch=findMe'. There is also support for filtering on specific columns using parameters in the format 'sSearch_columnIndex=findMe'.
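And a similarly simplified sketch of the global filter, OR-ing together Contains() calls across all searchable string columns (the real parser handles more than just string properties):

// Sketch: apply the global sSearch value across all searchable string columns.
private static IQueryable<T> ApplyFilter<T>(IQueryable<T> source,
                                            HttpRequestBase request,
                                            Dictionary<int, string> propertyMap)
{
    var search = request["sSearch"];

    if (string.IsNullOrEmpty(search))
    {
        return source;
    }

    var param = Expression.Parameter(typeof(T), "val");
    var searchConstant = Expression.Constant(search);
    Expression body = null;

    foreach (var column in propertyMap)
    {
        if (request["bSearchable_" + column.Key] != "true")
        {
            continue;
        }

        var property = Expression.Property(param, column.Value);

        if (property.Type != typeof(string))
        {
            continue; // keep the sketch simple: only filter string columns
        }

        var contains = Expression.Call(property, "Contains", null, searchConstant);
        body = body == null ? (Expression)contains : Expression.OrElse(body, contains);
    }

    if (body == null)
    {
        return source;
    }

    var predicate = Expression.Lambda<Func<T, bool>>(body, param);

    return source.Where(predicate);
}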

The C# Datatables Processor
The parser is a generic class which implements most of the server-side features of the Datatables plugin in a reusable manner, with special emphasis on performance. For example, an application which requires grids for people, cities and shopping lists does not need special logic for sorting and filtering each entity type, because the parser dynamically generates the expressions required to support these functions. If our first client-side example were configured to use server-side processing it would probably look like this:

<table id="PeopleListTable"></table>

$(function () {
    var peopleList = $('#PeopleListTable').dataTable({
        bServerSide: true,
        bProcessing: true,
        sServerMethod: "POST",
        sAjaxSource: "@Url.Action("All", "Person")",
        aoColumns: [
            { mData: "FirstName", sTitle: "First Name" },
            { mData: "LastName", sTitle: "Last Name" }
        ]
    });
});
public JsonResult All()
{
    var context = new PeopleEntities();
    var parser = new DataTablesParser<Person>(Request, context.People);

    return Json(parser.Parse());
}

With the above combination of markup, JavaScript and three lines of server-side code, you have the ability to render a very rich and responsive grid in little time.

Entity Framework Performance
The parser supports two separate scenarios, which are determined by the provider of the IQueryable supplied to its constructor: the simple case, where all or most processing is handled in memory via LINQ to Objects, and the more complex case, where most or all processing is handled on the database server via LINQ to Entities. For the LINQ to Entities case, we ensure all the expressions sent to Entity Framework are translatable to valid T-SQL statements. The goal here is to avoid the cost of bringing most or all of the data across the wire and into memory for processing. Imagine a grid for a dataset with 2 million records where you pull all 2 million records from the database only to send 10 to the client.
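As a usage sketch inside a controller action like the All() example above (GetCachedPeople is a hypothetical helper returning an in-memory IEnumerable<Person>), the same parser call covers both scenarios; only the source of the IQueryable changes:

// LINQ to Objects: the data is already in memory, so everything is processed there.
var inMemoryPeople = GetCachedPeople().AsQueryable(); // hypothetical helper
var memoryResult = new DataTablesParser<Person>(Request, inMemoryPeople).Parse();

// LINQ to Entities: paging, sorting and filtering are translated to T-SQL
// and executed on the database server.
var context = new PeopleEntities();
var databaseResult = new DataTablesParser<Person>(Request, context.People).Parse();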

As an example, the following T-SQL statement should be the result of the request parameters listed after it. All the sorting, filtering and paging parameters have been translated and are represented in the statement.

SELECT TOP (10) [Filter1].[Id] AS [Id],
                [Filter1].[FirstName] AS [FirstName],
                [Filter1].[LastName] AS [LastName]
FROM ( SELECT [Extent1].[Id] AS [Id],
              [Extent1].[FirstName] AS [FirstName],
              [Extent1].[LastName] AS [LastName],
              row_number() OVER (ORDER BY [Extent1].[FirstName] ASC) AS [row_number]
       FROM [dbo].[People] AS [Extent1]
       WHERE ([Extent1].[FirstName] LIKE N'%john%')
             OR ([Extent1].[LastName] LIKE N'%john%')
     ) AS [Filter1]
WHERE [Filter1].[row_number] > 0
ORDER BY [Filter1].[FirstName] ASC

sEcho:35
iColumns:7
sColumns:
iDisplayStart:0
iDisplayLength:10
mDataProp_0:FirstName
mDataProp_1:LastName
sSearch: john
iSortCol_0:1
sSortDir_0:asc

The 'iDisplayStart' parameter determines the zero-based start of a page of data and 'iDisplayLength' determines the length of each page of data.
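A minimal paging sketch using those two parameters (illustrative only; the name filteredAndSorted stands for the IQueryable produced by the earlier filtering and sorting steps):

// Sketch: page the already filtered and sorted query using
// iDisplayStart (zero-based offset) and iDisplayLength (page size).
private static IQueryable<T> ApplyPaging<T>(IQueryable<T> filteredAndSorted,
                                            HttpRequestBase request)
{
    var start = int.Parse(request["iDisplayStart"]);
    var length = int.Parse(request["iDisplayLength"]);

    return filteredAndSorted.Skip(start).Take(length);
}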

Where is X feature?
The biggest feature missing from the parser is processing individual column search filters. Originally, the individual property search and the generic search were implemented as two separate functions. However, I am convinced that the bulk of the logic in the generic search can be generalized to also handle the individual property search. I am open to any ideas on this one. I have also been asked about sorting/filtering on sub-properties. This should be possible in LINQ to Objects, but I have not been able to look into it.

Conclusion
The parser is definitely a work in progress in the sense that it is always being improved whenever possible, but it certainly saves time when using the Datatables plugin for grids.

The parser can be added to your project via NuGet using the following command:

PM> Install-Package DataTablesParser

Please note that the NuGet package, as of the publishing of this post, does not have the most up-to-date fixes and changes. I plan to update the NuGet package as soon as the new changes have been thoroughly tested.

You can get the latest code or send pull requests at the GitHub repository here:
https://github.com/garvincasimir/csharp-datatables-parser

Extension method for converting generic lists to CSV in C# [Updated]


Over a year ago I published a post which showed a C# extension method that can be used to convert a generic list to CSV. Recently, I came across a problem in the code: it does not handle Nullable<T> properly. The following is an update to the code which fixes the Nullable<T> issues. I also added a cheap option which converts camel case headers into separate words.

public static string ToCSV<T>(this IEnumerable<T> list, bool showheader = true, bool processHeaders = false)
{
    var type = typeof(T);
    var properties = type.GetProperties();

    //Setup expression constants
    var param = Expression.Parameter(type, "val");
    var doublequote = Expression.Constant("\"");
    var doublequoteescape = Expression.Constant("\"\"");
    var comma = Expression.Constant(",");

    //Convert all properties to strings, escape and enclose in double quotes
    var propq = (from prop in properties
                 let tostringcall = Expression.Call(typeof(Convert).GetMethod("ToString", new Type[] { typeof(object) }), Expression.Convert(Expression.Property(param, prop), typeof(object)))
                 let replacecall = Expression.Call(tostringcall, typeof(string).GetMethod("Replace", new Type[] { typeof(String), typeof(String) }), doublequote, doublequoteescape)
                 select Expression.Call(typeof(string).GetMethod("Concat", new Type[] { typeof(String), typeof(String), typeof(String) }), doublequote, replacecall, doublequote)
                ).ToArray();

    //Convert an instance of the object to a single csv line
    var concatLine = propq[0];
    for (int i = 1; i < propq.Length; i++)
        concatLine = Expression.Call(typeof(string).GetMethod("Concat", new Type[] { typeof(String), typeof(String), typeof(String) }), concatLine, comma, propq[i]);

    var method = Expression.Lambda<Func<T, String>>(concatLine, param).Compile();

    if (showheader)
    {
        //Create header row
        var header = String.Join(",", properties.Select(p => processHeaders ? Regex.Replace(p.Name, "(\\B[A-Z])", " $1").Trim() : p.Name).ToArray());

        return header + Environment.NewLine + String.Join(Environment.NewLine, list.Select(method).ToArray());
    }
    else
    {
        return String.Join(Environment.NewLine, list.Select(method).ToArray());
    }
}
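Here is a quick usage sketch of the updated method (the Employee type and values are made up for illustration):

class Employee
{
    public string FirstName { get; set; }
    public int? YearsOfService { get; set; } // Nullable<int> is now handled correctly
}

var employees = new List<Employee>
{
    new Employee { FirstName = "Ada", YearsOfService = 5 },
    new Employee { FirstName = "Ron", YearsOfService = null }
};

// processHeaders: true splits camel case names, so the header row
// becomes First Name,Years Of Service instead of FirstName,YearsOfService.
var csv = employees.ToCSV(showheader: true, processHeaders: true);
Console.Write(csv);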

Extension method for converting generic lists to CSV in C#


I created the following class so I could easily convert a generic list to a CSV string.  This may be handy when you want to quickly export a moderately sized result set to Microsoft Excel.

    public static class CsvConverter
    {
        public static string ToCSV<T>(this IEnumerable<T> list)
        {
            var type = typeof(T);
            var props = type.GetProperties();

            //Setup expression constants
            var param = Expression.Parameter(type, "x");
            var doublequote = Expression.Constant("\"");
            var doublequoteescape = Expression.Constant("\"\"");
            var comma = Expression.Constant(",");

            //Convert all properties to strings, escape and enclose in double quotes
            var propq = (from prop in props
                         let tostringcall = Expression.Call(Expression.Property(param, prop), prop.ReflectedType.GetMethod("ToString",new Type[0]))
                         let replacecall = Expression.Call(tostringcall, typeof(string).GetMethod("Replace", new Type[] { typeof(String), typeof(String) }), doublequote, doublequoteescape)
                         select Expression.Call(typeof(string).GetMethod("Concat", new Type[] { typeof(String), typeof(String), typeof(String) }), doublequote, replacecall, doublequote)
                         ).ToArray();

            var concatLine = propq[0];
            for (int i = 1; i < propq.Length; i++)
                concatLine = Expression.Call(typeof(string).GetMethod("Concat", new Type[] { typeof(String), typeof(String), typeof(String) }), concatLine, comma, propq[i]);

            var method = Expression.Lambda<Func<T, String>>(concatLine, param).Compile();

            var header = String.Join(",", props.Select(p => p.Name).ToArray());

            return header + Environment.NewLine + String.Join(Environment.NewLine, list.Select(method).ToArray());
        }
    }

Adding this as an extension method for IEnumerable may not be the best thing, since the above will fail if T is an object with no properties. However, you probably wouldn't be converting something like a List of strings to CSV anyway. If you are, then you can simply use the following code:

var csv = string.Join(",", list.ToArray());

The following console application sample demonstrates how the new method can be used. I hope someone finds this useful.

   class Program
    {
        static void Main(string[] args)
        {
            var list = new List<person>();
            var limit = 100;

            for (int x = 0; x < limit; x++)
            {
                var fname = "Ron";
                var lname = "Obvious";

                list.Add(new person()
                {
                    Age = x,
                    FirstName = fname,
                    LastName = lname
                });

            }
         
            var csv = list.ToCSV();

            Console.Write(csv);
            Console.ReadLine();

        }
    }

    class person
    {
        public int Age { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }