This notebook walks through an example of KMeans clustering of crime data alongside alcohol license locations. The clustering is based solely on the Lat/Long coordinates of stores and crimes. The tools I use are pandas, scikit-learn, and matplotlib, along with Leaflet for an interactive map.

The most basic question being answered is:

Given Lat/Long coordinates, can we draw some association between liquor store centroids and crime (or crime-type) centroids? Put another way: do clusters of crime overlap with clusters of liquor stores?

The data we're using is from SFGOV as well as the Alcoholic Beverage Control.

%matplotlib inline
import pandas as pd
import numpy as np
from pandas.tools.plotting import scatter_matrix
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
import random
pd.options.display.mpl_style = 'default'

# load the alcohol license locations and the reduced crime incident extract
alc = pd.read_csv("data/alcohol_licenses_locations.csv")
crime = pd.read_csv("data/Map__Crime_Incidents_-_from_1_Jan_2003_REDUCED.csv")
alc.columns
Index([u'Unnamed: 0', u'Join_Count', u'Status', u'Score', u'Match_type', u'Side', u'X', u'Y', u'Match_addr', u'ARC_Street', u'Entry_no', u'Owner_name', u'street', u'city', u'state', u'zip', u'Entry_no_1', u'License_Nu', u'Status_1', u'License_Ty', u'Orig_Iss_D', u'Expir_Date', u'Census_tra', u'Business_N', u'Mailing_Ad', u'Geo_Code', u'Tract2010', u'coords.x1', u'coords.x2'], dtype='object')

crime.columns
Index([u'IncidntNum', u'Category', u'Descript', u'DayOfWeek', u'Date', u'Time', u'PdDistrict', u'Resolution', u'Address', u'X', u'Y', u'Location'], dtype='object')

This is an outer join combining the reduced crime set and the alcohol license location data. It joins on the X and Y columns (longitude and latitude).

combo = pd.merge(alc, crime, on=['X','Y'], how='outer')
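
Since the join is on exact coordinates, only a subset of rows will actually line up. A quick sanity check of the match rate (my own hypothetical addition, not in the original notebook) could look like this:

# Hypothetical check: how many alcohol-license rows share an exact X/Y
# coordinate with at least one crime record? Exact-coordinate joins are
# brittle, so expect this to undercount real proximity.
coord_matches = pd.merge(alc[['X', 'Y']], crime[['X', 'Y']].drop_duplicates(),
                         on=['X', 'Y'], how='inner')
print(len(coord_matches))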

At this point I'm going to reduce the license types to just 20 and 21, which are the off-sale (off-site consumption) types.

Reference dictionary here: http://www.abc.ca.gov/datport/SubAnnStatRep.pdf
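
For reference, per that PDF, type 20 is off-sale beer & wine and type 21 is off-sale general. A small lookup dict (my own, purely for readability) makes that explicit:

# Hypothetical helper: the two ABC off-sale license types used below, with
# their descriptions from the reference PDF.
OFF_SALE_TYPES = {20: 'Off-Sale Beer & Wine', 21: 'Off-Sale General'}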

features = ['X','Y']

K Means

http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html

from sklearn.cluster import KMeans

Clustering Liquor Stores

I'm looking at just the type 20 and 21 licenses because these are for off-site sales.

print len(alc)
alc = alc[(alc['License_Ty'] == 20) | (alc['License_Ty'] == 21)]
print len(alc)

3635
809

alc_X = alc[features]


# try a range of cluster counts and plot the resulting centroids (black)
# over the store locations
for num_clusters in range(10,75,5):
    km = KMeans(num_clusters)
    km_fit = km.fit(alc_X)
    ax = alc_X.plot(kind='scatter',x='X',y='Y', legend=str(num_clusters), figsize=(8, 6))
    pd.DataFrame(km_fit.cluster_centers_).plot(kind='scatter',x=0,y=1,color='k',ax=ax)
    ax.set_title(str(num_clusters) + " License 20 & 21 Clusters")

[Figures: scatter plots of license 20 & 21 store locations with KMeans centroids overlaid, one per cluster count]

A thoroughly unscientific eyeball test made 55 clusters jump out at me as approximately right for the alcohol stores - it seems to strike a decent balance across the different parts of the map.

I'm proceeding with 55 clusters; feel free to change that as you see fit.
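
If you want something slightly more principled than eyeballing, a rough elbow check on KMeans inertia is one option. This is a sketch of my own (not part of the original analysis), reusing the same range of cluster counts:

# Sketch: plot within-cluster sum of squares (inertia_) against the number of
# clusters and look for the point where adding clusters stops helping much.
ks = range(10, 75, 5)
inertias = [KMeans(k).fit(alc_X).inertia_ for k in ks]
plt.figure(figsize=(8, 6))
plt.plot(ks, inertias, marker='o')
plt.xlabel('number of clusters')
plt.ylabel('inertia (within-cluster sum of squares)')
plt.title('Elbow check for license 20 & 21 locations')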

num_clusters = 55
liq_km = KMeans(num_clusters)
liq_km_fit = liq_km.fit(alc_X)
liq_ax = alc_X.plot(kind='scatter',x='X',y='Y', legend=str(num_clusters), figsize=(8, 6))
pd.DataFrame(liq_km_fit.cluster_centers_).plot(kind='scatter',x=0,y=1,color='k',ax=liq_ax)
liq_ax.set_title(str(num_clusters) + " License 20 & 21 Clusters")

[Figure: 55 license 20 & 21 clusters - store locations with centroids in black]
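
As a quick follow-up (a hypothetical inspection of my own, not in the original), labels_ on the fitted model records which cluster each store fell into, so the cluster sizes are easy to pull out:

# How many stores landed in each of the 55 clusters?
print(np.bincount(liq_km_fit.labels_))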

Now we're going to repeat the process for crime locations. The goal is to see whether the crime clusters overlap with the store clusters above. Eventually we'll move into specific categories of crime.

Clustering Crime Categories

print crime.Category.unique()
crime.Date = crime.Date.apply(pd.to_datetime)  # parse dates so we can filter by year

['ASSAULT' 'OTHER OFFENSES' 'NON-CRIMINAL' 'SEX OFFENSES, FORCIBLE'
 'SUSPICIOUS OCC' 'DRUG/NARCOTIC' 'WEAPON LAWS' 'VANDALISM' 'TRESPASS'
 'SECONDARY CODES' 'DRIVING UNDER THE INFLUENCE' 'FAMILY OFFENSES'
 'DRUNKENNESS' 'LOITERING' 'PROSTITUTION' 'LIQUOR LAWS'
 'DISORDERLY CONDUCT' 'SUICIDE' 'SEX OFFENSES, NON FORCIBLE'
 'PORNOGRAPHY/OBSCENE MAT']
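
Before narrowing down, it can help to see how the incidents break down by category. This is a hypothetical aside, not part of the original notebook:

# Hypothetical aside: incident counts per category, largest first.
print(crime.Category.value_counts().head(10))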

sub_crime = crime[(crime['Category'] == "ASSAULT")] # look at just assaults

I looked at just assaults in order to dive a bit deeper into the data itself.

print len(crime)
sub_crime = crime[(crime['Category'] == "ASSAULT")]
print len(sub_crime)
sub_crime = sub_crime[sub_crime.Date > '2013-1-1'][sub_crime.Date < '2014-1-1'].reset_index()
crime_X = sub_crime[features]
print len(crime_X)

404080
64033
12588


/Library/Python/2.7/site-packages/pandas/core/frame.py:1771: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
  "DataFrame index.", UserWarning)



# same sweep of cluster counts, this time over assault locations
for num_clusters in range(10,75,5):
    km = KMeans(num_clusters)
    km_fit = km.fit(crime_X)
    ax = crime_X.plot(kind='scatter',x='X',y='Y', legend=str(num_clusters), figsize=(8, 6))
    pd.DataFrame(km_fit.cluster_centers_).plot(kind='scatter',x=0,y=1,color='k',ax=ax)
    ax.set_title(str(num_clusters) + " Assault Clusters")

[Figures: scatter plots of assault locations with KMeans centroids overlaid, one per cluster count]

num_clusters = 55
crime_km = KMeans(num_clusters)
crime_km_fit = crime_km.fit(crime_X)
crime_ax = crime_X.plot(kind='scatter',x='X',y='Y', legend=str(num_clusters), figsize=(8, 6))
pd.DataFrame(crime_km_fit.cluster_centers_).plot(kind='scatter',x=0,y=1,color='k',ax=crime_ax)
crime_ax.set_title(str(num_clusters) + " Assault Clusters")

[Figure: 55 assault clusters - incident locations with centroids in black]

  • Blue: the underlying liquor store locations
  • Red: assault centroids
  • Black: liquor store centroids

alc_base = alc_X.plot(kind='scatter',x='X',y='Y', legend=str(num_clusters), figsize=(10, 8))
pd.DataFrame(crime_km_fit.cluster_centers_).plot(kind='scatter',x=0,y=1,color='r', ax=alc_base)
pd.DataFrame(liq_km_fit.cluster_centers_).plot(kind='scatter',x=0,y=1,color='k',ax=alc_base)
alc_base.set_title("Clustering of Assaults and Liquor Stores")

[Figure: liquor store locations (blue) with assault centroids (red) and liquor store centroids (black)]
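
To put a rough number on the visual overlap, one option (a hypothetical sketch of mine, not part of the original analysis) is to measure the distance from each assault centroid to its nearest liquor store centroid. The distances are in raw lat/long degrees, so they only make sense as relative comparisons:

# For each assault centroid, distance (in degrees) to the nearest liquor
# store centroid. Small values mean the two sets of centroids sit close
# together on the map.
diffs = (crime_km_fit.cluster_centers_[:, None, :] -
         liq_km_fit.cluster_centers_[None, :, :])
nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
print(pd.Series(nearest).describe())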

Leaflet

I also decided to plot this data in Leaflet to exercise my JavaScript skills a bit. Leaflet is a JavaScript mapping library.

  • Red: assaults (centroids drawn larger)
  • Black: liquor stores (centroids drawn larger)
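
One way to get the centroids onto a Leaflet map is to dump them to GeoJSON point features. The sketch below is my own rough take on that export; the helper and the file names are assumptions, not something from the original notebook:

import json

def centroids_to_geojson(centers, path):
    # GeoJSON expects [longitude, latitude]; in this data X is longitude
    # and Y is latitude.
    feats = [{"type": "Feature",
              "geometry": {"type": "Point",
                           "coordinates": [float(x), float(y)]},
              "properties": {}} for x, y in centers]
    with open(path, 'w') as f:
        json.dump({"type": "FeatureCollection", "features": feats}, f)

# Hypothetical output paths - point these wherever the Leaflet page loads from.
centroids_to_geojson(liq_km_fit.cluster_centers_, 'liquor_centroids.geojson')
centroids_to_geojson(crime_km_fit.cluster_centers_, 'assault_centroids.geojson')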

Conclusion

It appears that there is at least some basic correlation between crimes and liquor stores. Obviously this will vary with the type of crime, but it is worth exploring further. This was not intended to be a scientific analysis - it's much more of an exploration. Due to any number of biases, you can't derive explicit relationships from this information at face value. Mostly I wanted to play around with a visual display of k-means and scikit-learn.