1887 crude mortality rate in Spain using the classInt package

[Maps: Crude Mortality Rate in Spain, 1887 Census; TBM_1887 classified with jenks, quantile, bclust and fisher breaks]

library(classInt)
library(RColorBrewer)

nclassint <- 5 # number of classes/colors to be used in the palette
cat <- classIntervals(dt$TBM, nclassint, style = "jenks")
colpal <- brewer.pal(nclassint, "Reds")
color <- findColours(cat, colpal) # sequential palette
bins <- cat$brks
lb <- length(bins)
cat

style: jenks
[20.3,25.9] (25.9,30.5] (30.5,34.4] (34.4,38.4] (38.4,58.2]
68         114         130         115          35
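The stored breaks can be reused outside classInt with base R's findInterval; a minimal sketch with made-up TBM values (the bins vector copies the Jenks breaks printed above, and the hex codes are the brewer.pal(5, "Reds") palette):

```r
# Jenks breaks copied from the output above
bins <- c(20.3, 25.9, 30.5, 34.4, 38.4, 58.2)
colpal <- c("#FEE5D9", "#FCAE91", "#FB6A4A", "#DE2D26", "#A50F15")  # brewer.pal(5, "Reds")
tbm <- c(21, 27.5, 33, 36, 50)  # hypothetical mortality rates
idx <- findInterval(tbm, bins, all.inside = TRUE)  # class index, 1 to 5
color <- colpal[idx]  # one fill color per area
```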

Save the number of areas falling in each class, for each classification style, into a data.frame (dat):

type first second third fourth fifth
1 quantile    91     93    92     91    95
2       sd    10    202   244      5     0
3    equal   100    246   113      2     1
4   kmeans    68    115   142    118    19
5    jenks    68    114   130    115    35
6   hclust   100    174   153     34     1
7   bclust    53    120   275     13     1
8   fisher    68    114   130    115    35
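A sketch of how a table like dat can be built: loop over the classification styles and, for each one, count how many areas fall in each class. A synthetic stand-in replaces dt$TBM so the snippet runs on its own, and the sd/hclust/bclust styles are left out to keep dependencies light:

```r
library(classInt)  # assumed installed

set.seed(1)
dt <- data.frame(TBM = runif(462, 20.3, 58.2))  # synthetic stand-in for the 1887 data
styles <- c("quantile", "equal", "kmeans", "jenks", "fisher")

counts <- t(sapply(styles, function(s) {
  ci <- classIntervals(dt$TBM, n = 5, style = s)
  # tabulate keeps empty classes, unlike table()
  tabulate(findInterval(dt$TBM, ci$brks, all.inside = TRUE), nbins = 5)
}))
dat <- data.frame(type = styles, counts, row.names = NULL)
names(dat)[-1] <- c("first", "second", "third", "fourth", "fifth")
dat
```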

and melt it into a long format (required by ggplot):

library(reshape2)
library(ggplot2)

dat1 <- melt(dat, id.vars = "type", value.name = "n.breaks")

ggplot(dat1, aes(x = variable, y = n.breaks, fill = type)) +
geom_bar(stat = "identity", position = position_dodge())

[Plot: number of areas per class, by classification style]

A match made in R: checking the order of geographical areas in shape files and in your data frames

Not every shape file is as nice as those provided in R libraries. Sometimes we have to deal with historical maps, which have been hand-drawn, re-touched and what not. To work with geo-referenced data it is essential that both the shape file and the data frame contain a variable with unique codes, with exactly the same number of areas and the same ordering in both files.

A quick way to check if shapefile and dataframe have the same number of areas:

nrow(df) == length(shape.file$Code)
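If the counts differ, base R's setdiff shows which codes are missing on which side; a quick sketch with made-up code vectors:

```r
shp.codes <- c("305", "15078", "20010")  # hypothetical codes in the shapefile
df.codes  <- c("305", "15078", "99999")  # hypothetical codes in the data frame
setdiff(shp.codes, df.codes)  # codes found only in the shapefile
setdiff(df.codes, shp.codes)  # codes found only in the data frame
```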

In the shapefile, one can also select a couple of areas big enough so that they can easily be located, and plot them as “control” areas.
For instance, I want to select the area with code “15078” in the shapefile:
which(shape.file$Code=="15078", arr.ind=TRUE)
[1] 271

which is the area in the 271st position (conversely, shape.file$Code[271] gives the code of the area in position 271).
plot(shape.file)
plot(shape.file[c(271,898),],col="red",border="red",add=T)

This is an easy way to locate your “control” area(s).
[Map: the control areas highlighted in red]
Ideally, you should have a variable identical to the one in the shapefile: a unique code, the name of the area, or some other factor that allows you to locate each area in space.

An easy way to check if both shape file and data frame have the same ordering of geographical areas is to test it:
code.sh <- cbind(1:length(shape.file$Code), as.vector(shape.file$Code))
code.df <- cbind(1:nrow(df), df$Code)
code.df == code.sh
     [,1]  [,2]
[1,] TRUE  TRUE
[2,] TRUE  TRUE
[3,] TRUE  TRUE

What if it’s not?
First option: the inelegant solution.
Manually change the order of the areas in a csv file to match the exact order they have in the shape file. It’s easy: create an ordinal index for the shapefile codes, paste it into Excel, and assign it with a VLOOKUP function.
Second option: the smart R match.
In R there is a function called match that returns a vector of the positions of first matches of its first argument in its second:
my.match <- match(df$Code, shape.file$Code)
NB: to use match, the two variables providing the area codes must share the same set of unique codes, or else funny stuff happens. To check that everything is in its right place, you can plot the two “control” spatial polygons we chose in the beginning, using their position in the dataframe rather than in the shapefile:
plot(shape.file)
plot(shape.file[c(which(df$Code=="305"), which(df$Code=="15078")),], col="orange", add=TRUE)
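Once the codes match, the result of match can reorder the data frame rows to follow the shapefile. Note that the argument order matters: match(a, b) gives, for each element of a, its position in b, so reordering df requires matching the shapefile codes against df$Code. A base-R sketch with toy codes:

```r
df <- data.frame(Code = c("B", "C", "A"), TBM = c(2, 3, 1))  # toy data frame
shp.codes <- c("A", "B", "C")  # order of areas in the shapefile

idx <- match(shp.codes, df$Code)  # position of each shapefile code in df
stopifnot(!anyNA(idx))  # every shapefile code must exist in df
df.ordered <- df[idx, ]  # rows now follow the shapefile order
df.ordered$Code  # "A" "B" "C"
```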

Game of Thrones maps in R…

The map of GOT world with rivers, roads, lakes, the Wall, and main cities:

GOT_map

Neighborhood relations according to Sphere of Influence pretty much coincide with roads and rivers (package spdep):

GOT_neigh

Paste some images to locate the (surviving) Stark family members, using readPNG from the png library together with rasterImage:

GOT_Stark

Creating neighborhood matrices for Spatial Polygons in R (updated)

One of the first steps in spatial analysis is to create a neighborhood matrix, that is, to establish a relationship/connection between each and (ideally!) every polygon. Why? Given that the premise of spatial analysis is that neighboring locations are more similar than far-away locations, we need to define what is “near”: a set of neighbors for each location that captures such dependence.

There are many ways to define neighbors, and usually, they are not interchangeable, meaning that one neighborhood definition will capture spatial autocorrelation differently from another.

In R the package spdep allows you to create a neighbor matrix according to a wide range of definitions: contiguity, radial distance, graph based, and triangulation (and more). The three main and most used neighbor definitions are:

A) Contiguity based of order 1 or higher (most used in social sciences)

B) Distance based

C) Graph based

Install and load the maptools and spdep libraries, then read the shapefile of North Carolina counties:

library(maptools)
library(spdep)
NC <- readShapePoly(system.file("shapes/sids.shp", package = "maptools")[1],
                    IDvar = "FIPSNO", proj4string = CRS("+proj=longlat +ellps=clrk66"))

A. Contiguity based relations

are the most used in the presence of irregular polygons with varying shape and surface, since contiguity ignores distance and focuses instead on the location of an area. The function poly2nb allows you to create 2 types of contiguity based relations:

1. First Order Queen Contiguity

FOQ contiguity defines a neighbor when at least one point on the boundary of one polygon is shared with at least one point of its neighbor (common border or corner);

nb.FOQ = poly2nb(NC, queen=TRUE, row.names=NC$FIPSNO)
#row.names refers to the unique names of each polygon
nb.FOQ
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 490
## Percentage nonzero weights: 4.9
## Average number of links: 4.9

Calling nb.FOQ you get a summary of the neighbor matrix, including the total number of areas/counties, and average number of links.

2. First Order Rook Contiguity

FOR contiguity does not include corners, only borders, thus comprising only polygons sharing more than one boundary point;

nb.RK = poly2nb(NC, queen=FALSE, row.names=NC$FIPSNO)
nb.RK
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 462
## Percentage nonzero weights: 4.62
## Average number of links: 4.62

NB: if there is a region without any link, there will be a message like this:
Neighbour list object:
Number of regions: 910
Number of nonzero links: 4620
Percentage nonzero weights: 0.5924405
Average number of links: 5.391209
10 regions with no links:
1014 3507 3801 8245 9018 10037 22125 30005 390299 390399

where you can identify the regions with no links (1014, 3507, …) using which(…); in R it is possible to “manually” connect them, or to change the neighbor matrix so that they are included (e.g. with graph or distance based neighbors).
Sometimes it also happens that polygons that have been retouched (sounds like blasphemy, but it happens a lot with historical maps) fail to register shared borders. This is when manually setting up neighbors comes in handy (you can’t do that in GeoDa).
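Manual patching is possible because an nb object is essentially a list of integer vectors, one per region, with 0L marking a region with no links. A small helper (a sketch, not part of spdep) that adds a symmetric link between regions i and j:

```r
# add a symmetric link between regions i and j in a neighbour list;
# a 0L entry (meaning "no links") is dropped when the first link is added
add_link <- function(nb, i, j) {
  nb[[i]] <- sort(unique(c(nb[[i]][nb[[i]] != 0L], as.integer(j))))
  nb[[j]] <- sort(unique(c(nb[[j]][nb[[j]] != 0L], as.integer(i))))
  nb
}

nb <- list(2L, 1L, 0L)      # region 3 has no links
nb <- add_link(nb, 3L, 1L)  # connect regions 3 and 1
```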

Contiguity

Higher order neighbors are useful when looking at the effect of lags on spatial autocorrelation and in spatial autoregressive models like SAR with a more global spatial autocorrelation:

nb.SRC = nblag(nb.RK,2) #second order rook contiguity
nb.SRC
## [[1]]
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 490
## Percentage nonzero weights: 4.9
## Average number of links: 4.9
##
## [[2]]
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 868
## Percentage nonzero weights: 8.68
## Average number of links: 8.68
##
## attr(,"call")
## nblag(neighbours = nb.RK, maxlag = 2)

Contiguity2

B. Distance based neighbors

DBN defines a set of connections between polygons based either on (1) a defined Euclidean distance between centroids (dnearneigh) or (2) a certain number of nearest neighbors (knn2nb), e.g. the 5 nearest neighbors;

coordNC = coordinates(NC) #get centroids coordinates
d05m = dnearneigh(coordNC, 0, 0.5, row.names=NC$FIPSNO)
nb.5NN = knn2nb(knearneigh(coordNC,k=5),row.names=NC$FIPSNO) #set the number of neighbors (here 5)
d05m
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 430
## Percentage nonzero weights: 4.3
## Average number of links: 4.3
nb.5NN
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 500
## Percentage nonzero weights: 5
## Average number of links: 5
## Non-symmetric neighbours list

A little trick: if you want information on neighbor distances, whatever the type of neighborhood may be:

distance = unlist(nbdists(nb.5NN, coordNC))
distance
##   [1] 0.3613728 0.3693554 0.3864847 0.2766561 0.5168459 0.3709748 0.2607982
##   [8] 0.3232974 0.4376632 0.2862144 0.5773310 0.3778483 0.4463538 0.2914539
## ...
## [498] 0.3407192 0.3995114 0.1838115

Distance

C. Graph based (I’ve never used them, but it’s good to know that they exist)

Delaunay triangulation tri2nb constructs neighbors through the triangulation that is dual to the Voronoi tessellation, so that each centroid is a triangle node. As a consequence, DT ensures that every polygon has a neighbor, even in the presence of islands. The “problem” with this specification is that it treats our area of study as if it were an island itself, without any neighbors (as if North Carolina were an island with no Virginia or South Carolina)… Therefore, distant points that would not otherwise be neighbors (such as Cherokee and Brunswick counties) become such;
Gabriel Graph gabrielneigh is a particular case of the DT, where two points/centroids a and b are neighbors if no other point/centroid lies within the circle passing through a and b with diameter ab;
Sphere of Influence soi.graph: two points a and b are SOI neighbors if the circles centered on a and b, with radius equal to the nearest-neighbor distance of a and b respectively, intersect twice. It is a sort of Delaunay triangulation without the longest connections;
Relative Neighbors relativeneigh is a particular case of the GG. An edge between a and b belongs to the RN graph if the intersection of the two circles centered on a and b with radius ab contains no other point.

delTrinb = tri2nb(coordNC, row.names=NC$FIPSNO) # Delaunay triangulation
summary(delTrinb)
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 574
## Percentage nonzero weights: 5.74
## Average number of links: 5.74
## Link number distribution:
##
##  2  3  4  5  6  7  8  9 10
##  1  2 13 29 27 22  3  1  2
## 1 least connected region:
## 37039 with 2 links
## 2 most connected regions:
## 37005 37179 with 10 links
GGnb = graph2nb(gabrielneigh(coordNC), row.names=NC$FIPSNO) #gabriel graph
summary(GGnb)
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 204
## Percentage nonzero weights: 2.04
## Average number of links: 2.04
## 20 regions with no links:
## 37109 37131 37137 37141 37145 37147 37151 37159 37161 37165 37173 37175 37179 37183 37185 37187 37189 37195 37197 37199
## Non-symmetric neighbours list
## Link number distribution:
##
##  0  1  2  3  4  5  6  7
## 20 27 16 15 13  7  1  1
## 27 least connected regions:
## 37047 37053 37055 37075 37091 37105 37107 37113 37115 37117 37119 37121 37129 37133 37135 37139 37143 37149 37153 37155 37157 37163 37167 37177 37181 37191 37193 with 1 link
## 1 most connected region:
## 37057 with 7 links
SOInb = graph2nb(soi.graph(delTrinb, coordNC), row.names=NC$FIPSNO) #sphere of influence
summary(SOInb)
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 470
## Percentage nonzero weights: 4.7
## Average number of links: 4.7
## Link number distribution:
##
##  1  2  3  4  5  6  7  9
##  1  5 12 26 30 15 10  1
## 1 least connected region:
## 37031 with 1 link
## 1 most connected region:
## 37097 with 9 links
RNnb = graph2nb(relativeneigh(coordNC), row.names=NC$FIPSNO) #relative graph
summary(RNnb)
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 133
## Percentage nonzero weights: 1.33
## Average number of links: 1.33
## 31 regions with no links:
## 37047 37053 37097 37107 37109 37115 37131 37137 37141 37143 37145 37147 37151 37155 37159 37161 37163 37165 37167 37173 37175 37179 37183 37185 37187 37189 37191 37193 37195 37197 37199
## Non-symmetric neighbours list
## Link number distribution:
##
##  0  1  2  3  4
## 31 30 18 17  4
## 30 least connected regions:
## 37009 37027 37031 37035 37037 37039 37055 37073 37075 37083 37091 37095 37105 37113 37117 37119 37121 37125 37127 37129 37133 37135 37139 37149 37153 37157 37169 37171 37177 37181 with 1 link
## 4 most connected regions:
## 37001 37003 37059 37079 with 4 links

GraphBased

What to do with all this stuff? …

Compute and compare global Moran’s I
Draw LISA maps
Plot variograms and correlograms
…?
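As a first taste of that list, here is a minimal sketch of a global Moran's I test with spdep, run on a toy 3x3 grid instead of the NC counties (cell2nb, nb2listw and moran.test are spdep functions; the data vector is made up):

```r
library(spdep)  # assumed installed

nb <- cell2nb(3, 3)                # rook contiguity on a toy 3x3 grid
lw <- nb2listw(nb, style = "W")    # row-standardised spatial weights
x <- c(1, 1, 1, 1, 5, 1, 1, 1, 1)  # hypothetical variable with one hotspot
moran.test(x, lw)                  # global Moran's I, expectation and p-value
```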

Quick way to add annotations to your ggplot graphs

Lately, some of the graphs I have been working on contain “strange/erratic” values, so I decided to plot those values in a different color and, rather than adding an extra legend entry, to add a note explaining the difference. Among the many options available, I have found a very quick and harmless way to add annotations to ggplot graphs. It uses the library gridExtra, which provides user-level functions that work with “grid” graphics and draw tables.

1. Load the ggplot2 library:
library(ggplot2)
2. Save your graph as “my_graph”:
my_graph <- qplot(wt, mpg, data = mtcars)
3. Load gridExtra (and grid, which provides textGrob and gpar) and add the text to the graph. Note that x, hjust and vjust give the position of the text in the outer margins. If you want to annotate INSIDE the graph, use annotate:
library(gridExtra)
library(grid)
g <- arrangeGrob(my_graph, sub = textGrob("I pledge my life and honor to the Night's Watch, \nfor this night and all the nights to come.", x = 0, hjust = -0.1, vjust = 0.1, gp = gpar(fontface = "italic", fontsize = 10)))
4. Save the graph:
ggsave("my_graph_with_note.pdf", g, width = 5, height = 5)
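For comparison, annotating INSIDE the panel with annotate uses data coordinates rather than margin positions; a small sketch (the coordinates and label are arbitrary):

```r
library(ggplot2)

# annotate() places text inside the plotting panel, at data coordinates
p_in <- ggplot(mtcars, aes(wt, mpg)) +
  geom_point() +
  annotate("text", x = 4.5, y = 30, label = "heavy cars, low mpg",
           fontface = "italic", size = 3)
```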

my_graph_with_note


Here is the graph I have been working on:

MORAN_I_MAC

Valar Morghulis: Some charts using GOT (tv-show) deaths

Drawing from one of the most important demographic laws, Valar Morghulis (all men must die), here is a simple summary of the deadly happenings in four seasons of GOT as reported by the Washington Post.

Let’s start with the total number of (portrayed) deaths by season:
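The original df1 assignment did not survive the blog formatting; all the plots below need is a data frame with a Series and a Total column, e.g. (the counts here are placeholders, not the real tallies):

```r
# placeholder numbers, NOT the real death counts
df1 <- data.frame(Series = 1:4, Total = c(10, 20, 30, 40))
```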

# df1: data frame with a Series and a Total column
ggplot(df1,aes(x=factor(Series),y=Total))+
geom_bar(stat="identity",fill=c("yellow","orange","red","brown"))+
xlab("Season number")+
ylab("Total number of deaths")

Number of deaths by season

ggplot(df1,aes(x=Series,y=Total))+
geom_line(lwd=2)+
xlab("Season number")+
ylab("Total number of deaths")

Number of deaths by season

by location in Westeros:

# df2: data frame with a Location and a Deaths column
Location=c("King's Landing","Beyond the Wall","Castle Black","The Twins","The Riverlands")
ggplot(df2,aes(x=factor(Location),y=Deaths))+
geom_bar(stat="identity",fill=c("lightblue","black","brown","darkseagreen","red"))+
ylab("Total number of deaths")+
xlab("")+
theme(axis.text=element_text(size=15))

Number of deaths by location

by method of death:

# df3.2: long-format data with Method, variable (season) and value columns
Method=c("Animal","Animal Death","Arrows","Axe","Blade","Bludgeon","Crushing","Falling","Fire","Hands","HH item","Mace","Magic","Other","Poison","Spear","Unknown")
ggplot(df3.2,aes(x=factor(Method),y=value,fill=variable))+
geom_bar(stat="identity")+
ylab("")+
xlab("")+
theme(axis.text.x=element_text(size=15,angle=45))+
scale_fill_discrete(name="Method of Death", labels=c("Season 1", "Season 2", "Season 3", "Season 4"))

Number of deaths by method
and lastly by House allegiance:

# df4.2: long-format data with House, variable (season) and value columns
ggplot(df4.2,aes(x=reorder(factor(House),value),y=value,fill=variable))+
geom_bar(stat="identity")+
ylab("")+
xlab("")+
theme(axis.text.x=element_text(size=15,color="black"),
axis.text.y=element_text(size=15,color="black"))+
scale_fill_discrete(name="House Allegiance", labels=c("Season 1", "Season 2", "Season 3", "Season 4"))+
coord_flip()

Number of deaths by house

Pyramid-like bar chart for climate change barriers

I was scrolling through The Independent and got hooked on a graph displaying the percentage of people’s concerns regarding climate change by country, and was extremely surprised by the results: the UK and the US lag far behind countries including China in wanting their governments to pursue a meaningful commitment to successfully address climate change.

newplot

library(ggplot2)
library(grid)
library(plyr)

dta<-
structure(list(country = structure(c(15L, 3L, 4L, 5L, 14L, 6L,
10L, 12L, 1L, 2L, 7L, 8L, 9L, 11L, 13L, 15L, 3L, 4L, 5L, 14L,
6L, 10L, 12L, 1L, 2L, 7L, 8L, 9L, 11L, 13L), .Label = c("Australia",
"China", "Denmark", "Finland", "France", "Germany", "Hong Kong",
"Indonesia", "Malaysia", "Norway", "Singapore", "Sweden", "Thailand",
"UK", "US"), class = "factor"), issue = c("Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage who think climate \nchange is 'not a serious problem' ",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change",
"Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change"
), perc = c(32L, 14L, 23L, 10L, 26L, 11L, 22L, 18L, 11L, 4L,
5L, 3L, 2L, 5L, 6L, -17L, -4L, -8L, -3L, -7L, -4L, -10L, -8L,
-3L, -1L, -1L, -1L, -1L, -1L, -1L)), .Names = c("country", "issue",
"perc"), row.names = c(NA, -30L), class = "data.frame")

p <- ggplot(dta, aes(reorder(country,perc),perc,fill=issue)) +
geom_bar(subset = .(issue == "Percentage who think climate \nchange is 'not a serious problem' "), stat = "identity",colour="black",alpha=0.5) +
annotate("text",x = 16.5, y = -12,label=dta$issue[16], fontface="bold")+
geom_bar(subset = .(issue == "Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change"),colour="black", stat = "identity",alpha=0.5) +
annotate("text",x = 16.5, y = 15,label=dta$issue[1], fontface="bold")+
scale_fill_manual(values = c("#F7320B", "#2BC931"))+
geom_text(subset = .(issue == "Percentage who think climate \nchange is 'not a serious problem' "),
aes(label=perc), position="dodge", hjust=-.35)+
geom_text(subset = .(issue == "Percentage that want their country's strategy not to agree \nto any international agreement that addresses climate change"),colour="black", stat = "identity",aes(label=abs(perc)), position="dodge", hjust=2)+
coord_flip() +
xlab("")+
ylab("")+
scale_x_discrete(expand=c(0.2,0.55))+
scale_y_continuous(limits=c(-22,32),
breaks = c(-17,-10,0,10,32),
labels = paste0(as.character(c(17,10,0,10,32)), "%"))+
theme(axis.text.y  = element_text(size=13,hjust=1),
axis.text = element_text(colour = "black"),
plot.background = element_blank(),
panel.background = element_blank(),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.border = element_blank(),
axis.ticks = element_blank(),
axis.text.x = element_blank(),
legend.background =element_rect("white"),
legend.position="none",
strip.background = element_rect(fill = "white", colour = "white"),
strip.text.x = element_text(size = 13))

ggsave("newplot.pdf",p,scale=2)
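The per-layer subset argument used above relies on an old ggplot2/plyr combination that no longer works; in current ggplot2 the same effect is obtained by passing a filtered data frame to each layer. A sketch with a toy version of dta:

```r
library(ggplot2)

# toy stand-in for dta: signed percentages for two issues
toy <- data.frame(
  country = rep(c("A", "B"), 2),
  issue = rep(c("pro", "con"), each = 2),
  perc = c(10, 20, -5, -8)
)
p2 <- ggplot(toy, aes(reorder(country, perc), perc, fill = issue)) +
  geom_col(data = subset(toy, perc > 0), colour = "black", alpha = 0.5) +
  geom_col(data = subset(toy, perc < 0), colour = "black", alpha = 0.5) +
  coord_flip()
```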