Sunday, April 03, 2016

A trial port from Netty 3 to Netty 4.1, Part 2 of ?

Finally got the UI up and running. 

I was actually surprised to see it since I have not seen the new look before....

2016-04-03 15:49:50,254 INFO  [EpollServerWorkerThread#4] RpcHandler: Handling message [AggregatedFullHttpRequest]
2016-04-03 15:49:50,262 INFO  [EpollServerWorkerThread#4] HttpQuery: [id: 0x4cc79a07, L:/ - R:/] HTTP /aggregators done in 7ms
2016-04-03 15:49:50,282 INFO  [EpollServerWorkerThread#4] RpcHandler: Handling message [AggregatedFullHttpRequest]
2016-04-03 15:49:50,284 INFO  [EpollServerWorkerThread#4] HttpQuery: [id: 0x4cc79a07, L:/ - R:/] HTTP /s/gwt/opentsdb/images/corner.png done in 1ms
2016-04-03 15:49:50,285 INFO  [EpollServerWorkerThread#2] RpcHandler: Handling message [AggregatedFullHttpRequest]
2016-04-03 15:49:50,285 INFO  [EpollServerWorkerThread#3] RpcHandler: Handling message [AggregatedFullHttpRequest]
2016-04-03 15:49:50,286 INFO  [EpollServerWorkerThread#2] HttpQuery: [id: 0xabbb13cb, L:/ - R:/] HTTP /s/gwt/opentsdb/images/hborder.png done in 1ms
2016-04-03 15:49:50,287 INFO  [EpollServerWorkerThread#3] HttpQuery: [id: 0x4857bc87, L:/ - R:/] HTTP /s/gwt/opentsdb/images/vborder.png done in 2ms

I had a bit of trouble getting the static content server to work properly. I'm not sure what's going on, but after the first file was served, all subsequent requests seemed to stall and the browser reported that no content was delivered. It seemed to have something to do with the use of FileRegions, so I tried swapping OpenTSDB's file serving code for Netty's HttpStaticFileServerHandler, but I saw the same issues.

Eventually, I decided to bypass the problem by just loading the content up into ByteBufs and writing the bufs out. Not the most efficient approach, but I'll come back around to it soon.
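As a rough sketch of that stopgap (Netty 4.1 on the classpath; the class and helper names here are my placeholders, not OpenTSDB's actual code):

```java
import java.nio.file.Files;
import java.nio.file.Path;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public class StaticFileSketch {

    // Load the whole file into a ByteBuf instead of handing the
    // channel a FileRegion for zero-copy transfer.
    static FullHttpResponse serve(Path file) throws Exception {
        ByteBuf buf = Unpooled.wrappedBuffer(Files.readAllBytes(file));
        FullHttpResponse resp = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK, buf);
        HttpUtil.setContentLength(resp, buf.readableBytes());
        return resp;  // the caller would do ctx.writeAndFlush(resp)
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("corner", ".png");
        Files.write(tmp, new byte[] {1, 2, 3, 4});
        System.out.println(serve(tmp).content().readableBytes());  // 4
    }
}
```

It buys a working server at the cost of buffering every file on the heap, which is exactly the trade-off to revisit later.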

Here are some more conversion steps:

A lot of these:

  • query.method().getName()  ---> query.method().name()
  • channel.isConnected() ---> channel.isOpen()
  • HttpHeaders.Names.CONTENT_TYPE  --->  HttpHeaderNames.CONTENT_TYPE
A new class, HttpUtil, handles the setting of some headers, but not all. For example, this seems to be the most terse way of setting the content length on a response:   HttpUtil.setContentLength(response, buf.readableBytes());
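Putting the renames and HttpUtil together, a minimal self-contained example against Netty 4.1 (the body string is made up):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

public class ContentLengthSketch {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8);
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK, buf);
        // HttpHeaderNames replaces the old HttpHeaders.Names constants
        response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain");
        // HttpUtil sets Content-Length; there is no setter on the response itself
        HttpUtil.setContentLength(response, buf.readableBytes());
        System.out.println(response.headers().get(HttpHeaderNames.CONTENT_LENGTH));  // 5
    }
}
```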

Most Netty 4.1 sample code I have read uses ChannelHandlerContext.write(...) and ChannelHandlerContext.writeAndFlush(...) rather than the same methods on Channel. They are not quite identical: the Channel methods start the write from the tail of the pipeline, while the ChannelHandlerContext methods start from the current handler, skipping any outbound handlers sitting behind it. In a simple pipeline they end up doing the same work, and the context variants avoid a needless traversal, so I converted the OpenTSDB HttpQuery constructs to use ChannelHandlerContext.
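A sketch of the context-based style, exercised with Netty's EmbeddedChannel (the handler name is hypothetical, not an OpenTSDB class):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public class CtxWriteSketch extends SimpleChannelInboundHandler<FullHttpRequest> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        FullHttpResponse resp = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        HttpUtil.setContentLength(resp, 0);
        // ctx.writeAndFlush() starts outbound from this handler's position;
        // ctx.channel().writeAndFlush() would start from the pipeline's tail.
        ctx.writeAndFlush(resp);
    }

    public static void main(String[] args) {
        EmbeddedChannel ch = new EmbeddedChannel(new CtxWriteSketch());
        ch.writeInbound(new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.GET, "/aggregators"));
        FullHttpResponse resp = ch.readOutbound();
        System.out.println(resp.status().code());  // 200
    }
}
```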

The OpenTSDB PipelineFactory continues to need tweaking. Following up on the changes in Netty's HTTP codecs, specifically the new breakout of HttpRequest into separate parts, I am no longer sure it makes sense for OpenTSDB to not always have an HTTP object aggregator. Right now, OpenTSDB does not accept chunked requests unless the default is overridden. The problem is that Netty 4 assumes that either you or an HttpObjectAggregator handler will piece the full request back together, so if there are no ill side effects, I propose request aggregation become a permanent fixture. The HTTP switch now looks like this:
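Roughly, a pipeline with a permanent aggregator could be wired up like this (handler names and the 1 MB cap are my placeholders, not OpenTSDB's actual values):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

public class HttpPipelineSketch extends ChannelInitializer<Channel> {
    private static final int MAX_CONTENT_LENGTH = 1024 * 1024;  // placeholder cap

    @Override
    protected void initChannel(Channel ch) {
        ch.pipeline().addLast("codec", new HttpServerCodec());
        // Always aggregate: chunked requests arrive downstream as one FullHttpRequest
        ch.pipeline().addLast("aggregator", new HttpObjectAggregator(MAX_CONTENT_LENGTH));
        // ...application handlers (e.g. the RPC handler) would go here...
    }

    public static void main(String[] args) {
        EmbeddedChannel ch = new EmbeddedChannel(new HttpPipelineSketch());
        System.out.println(ch.pipeline().get("aggregator") != null);  // true
    }
}
```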

Next up is testing all the RPCs.
