A [machine learning](https://hackernoon.com/tagged/machine-learning) [algorithm](https://hackernoon.com/tagged/algorithm) using image data from the New York City Department of Transportation showed that during a 10-day period in December 2017, on one street in Harlem, New York City, the

**bus stop was blocked 57% of the time (55% weekdays 7am to 7pm)**

**bike lane was blocked 40% of the time (57% weekdays 7am to 7pm)**

![](https://hackernoon.com/hn-images/1*MYS95vc2-z3XnYEEc8mZTQ.gif)

Left: computer-identified vehicles. Right: computer-identified violators.

The press is really excited about self-driving cars and the automation of driving jobs. Less attention is paid to how the same advancements in computer vision will make machines superior to humans at jobs such as parking enforcement, where the basic requirement is detecting a vehicle’s location, determining if that location is legal, and taking appropriate action.

![](https://hackernoon.com/hn-images/1*U1i6obsq8cXcR59ZuNrraQ.jpeg)

Autonomous vehicle view. Source: Bosch

There are [three thousand traffic enforcement officers in NYC](http://www.nytimes.com/2013/11/29/nyregion/bangladeshis-build-careers-in-new-york-traffic.html); here is how video cameras and computers can do the same job with higher enforcement levels, lower cost, and greater equality.

### Prototype

![](https://hackernoon.com/hn-images/1*gh3jhG86sY2CopdOQ6QS3Q.jpeg)

NYC DOT traffic camera. Source: invisibleboxes.info

The New York City Department of Transportation (NYC DOT) maintains several hundred cameras throughout the city that post images in real time to [http://dotsignals.org/](http://dotsignals.org/).
To test the possible benefits of robo traffic cops, I built a system which highlights parking violations at the corner of [Saint Nicholas Ave and 145 street](https://www.google.com/maps/place/W+145th+St+%26+St+Nicholas+Ave,+New+York,+NYemail@example.com,-73.9469563,17z/data=!3m1!4b1!4m5!3m4!1s0x89c2f67c7dfaf285:0xdf5c9890fd51c22b!8m2!3d40.8240437!4d-73.9447676). **For a quick prototype, its results are pretty interesting.**

![](https://hackernoon.com/hn-images/1*phdcHWAa0VR10n1YQUg9qg.png)

green = bike lanes, blue = bus stop, red = cars on sidewalk

The avenue has unprotected bike lanes in both directions, highlighted in green. Throughout the city, unprotected bike lanes are often [illegally](http://www.nyc.gov/html/dot/downloads/pdf/bikelaneparking.pdf) blocked by cars, trucks and especially [UPS TRUCKS](https://www.dnainfo.com/new-york/20151002/central-harlem/harlem-bicyclist-sues-ups-for-routinely-parking-trucks-bike-lanes). On the right of the image, in blue, is a bus stop that instead functions as a parking spot. [NYC public buses crawl at dismal speeds because of blocked bus lanes and stops](http://busturnaround.nyc/).

The URL for the camera at 145th and Saint Nicholas Avenue is [http://dotsignals.org/google\_popup.php?cid=532](http://dotsignals.org/google_popup.php?cid=532). The machine learning algorithm code is available [here](https://github.com/Bellspringsteen/OurCamera). In short, a pre-trained [Tensorflow](https://www.tensorflow.org/) model was re-trained on [~2000 pre-classified images](https://github.com/Bellspringsteen/OurCamera/tree/master/data/images).
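Downstream of the detector, turning per-frame blocked/clear labels into the blocked-percentage figures reported in this article is just counting over timestamps. Here is a minimal sketch; the function name and data shapes are mine for illustration, not from the OurCamera repo:

```python
from datetime import datetime

def blocked_stats(frames):
    """frames: list of (timestamp, blocked) pairs, one per camera image.
    Returns (overall %, weekday-7am-to-7pm %) of frames with the lane blocked."""
    overall = [blocked for _, blocked in frames]
    # Weekday daytime: Monday (0) through Friday (4), 7:00 to 19:00.
    daytime = [b for t, b in frames if t.weekday() < 5 and 7 <= t.hour < 19]
    pct = lambda xs: 100.0 * sum(xs) / len(xs) if xs else 0.0
    return pct(overall), pct(daytime)

# Tiny synthetic example: two weekday-daytime frames (one blocked)
# plus one Sunday frame (blocked).
frames = [
    (datetime(2017, 12, 4, 8, 0), True),   # Monday 8am, blocked
    (datetime(2017, 12, 4, 9, 0), False),  # Monday 9am, clear
    (datetime(2017, 12, 3, 8, 0), True),   # Sunday 8am, blocked
]
overall, weekday_daytime = blocked_stats(frames)
```

With ~1 frame per second from the DOT feed, the real computation is the same loop over hundreds of thousands of frames.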
Then the model was used to classify the vehicles on the road and determine where they were stopped.

![](https://hackernoon.com/hn-images/1*MYS95vc2-z3XnYEEc8mZTQ.gif)

**The Results**

During a 10-day period in December 2017, the

**bike lane was blocked 40% of the time (57% weekdays 7am to 7pm)**

**bus stop was blocked 57% of the time (55% weekdays 7am to 7pm)**

Keep in mind this is just one average block. It means that if you are riding in a bike lane, you are swerving blindly in and out of it every other block. And if you are on a bus, your commute just got longer.

![](https://hackernoon.com/hn-images/1*h_mF6Qud7Aj6EW9ekTNbXA.png)

![](https://hackernoon.com/hn-images/1*9iDxCT0yNafhjoTFzs4lIQ.png)

![](https://hackernoon.com/hn-images/1*1Yv9BCWz8NitFiuzp6OLnw.png)

![](https://hackernoon.com/hn-images/1*KcCcykmbCepRtS3XPX2HnQ.png)

### Current Solution: Traffic Cops

In NYC there are [~3000 traffic police officers](http://www.nytimes.com/2013/11/29/nyregion/bangladeshis-build-careers-in-new-york-traffic.html), who make from [$30k-$45k per year](https://www1.nyc.gov/site/nypd/careers/civilians/traffic-enforcement-agents-benefits.page) (let’s estimate $48k-$72k [including benefits, health, and retirement](https://www.bls.gov/news.release/ecec.nr0.htm)) and in 2014 gave [seven million tickets](https://data.cityofnewyork.us/City-Government/Parking-Violations-Issued-Fiscal-Year-2016/kiv2-tbus/data#SaveAs) (filter by issuer T), which is [~1 ticket per agent per working-day hour](http://bfy.tw/Ggoe).

If the block at 145th and Saint Nicholas is indicative of other blocks in the city, then we can assume that the number of tickets being given represents less than .0001% of infractions. So ~$150 million in pay and benefits for traffic cops yielded NYC .0001% enforcement.

### Future Solution: Camera Traffic Cops

#### Higher Enforcement Levels

If NYC DOT put four cameras at each intersection, there would be near-complete coverage throughout the city, 24x7.
There is simply no way that deploying police officers could match this level of enforcement.

#### Less Cost

According to the DOT there are [12,460 traffic intersections in NYC.](http://www.nyc.gov/html/dot/html/infrastructure/signals.shtml) If we assume we need 4 cameras per intersection, let’s round up and say we need 50,000 traffic cameras. The hypothetical robo traffic cop could be set up for either onboard processing or central processing. For onboard processing, each camera would include a computer vision system and a cellular connection to report video proof of infractions. For central processing, the cameras would be connected by fiber back to a central server for group processing. Onboard processing would have a higher per-camera cost but lower installation cost; central processing has the opposite tradeoff.

* No matter which system, each camera could cost up to about $3,000 per year and still be less expensive than current traffic enforcement, and NYC would get complete coverage of every street! $48k to $72k per officer \* 3,000 officers / 50,000 cameras = $2.9k to $4.3k per camera per year.
* **All the camera systems discussed above would be much cheaper than the human officers, who cover a tiny fraction of the city.** This article does not even attempt to address the huge issues of US labor productivity stagnation, unequal unemployment, public agency labor relations, and income inequality.

#### Equality

All laws are not equally enforced. Currently, police officers decide against whom to enforce the law, leading to inequality in enforcement, often based on race and income. That said, [algorithms can be just as biased as police](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). They are not perfect and can potentially be more biased than people. For instance, a computer vision traffic enforcement system might erroneously single out certain groups for extra ticketing based on some visual characteristic of their vehicles.
But as long as the data and algorithms are open for scrutiny, we can work to eliminate bias to an extent not possible with police officer enforcement.

### Suggestions

NYC DOT should evaluate installing four cameras at each intersection in NYC. The camera feeds should be higher quality but should **blur faces onboard the camera before transmitting to the internet**. The feeds should be open to all. The NYC DOT should not require NY State approval to install the cameras. The NYPD should contract with an external company to build and maintain an open source system that processes the video feeds using machine learning to identify all moving and parking violations. The system should present each violation to NYPD officers behind a computer screen, who could review the footage and issue the ticket. This system, if implemented reasonably well, would be more effective, lower cost, and more equal than the current system.

#### Dear NYC, I am interested in working on this. DOT, NYPD, NYCGov, how can I help you evaluate and create this system?

![](https://i.ytimg.com/vi/uzYQfrYAnM0/hqdefault.jpg)

### Appendix

#### What would Richard Stallman say? What if I already have a big brother? What is the price of privacy?

Richard Stallman ([rms](https://en.wikipedia.org/wiki/Richard_Stallman)) would probably say cameras on each NYC block [imperil](https://www.gnu.org/philosophy/surveillance-vs-democracy.en.html) [democracy](https://www.youtube.com/watch?v=Q4b26gnsI24). Sometimes I agree that increased surveillance is bad for society and will make it easier for our government and companies to prey upon it. But I also think that the current methods of enforcement (i.e., police) already prey on society; it’s just the poor and people of color who are most harmed. They are harmed by uneven enforcement of some laws and a lack of enforcement of others. I hope a system of video surveillance might improve this.
At this point I think any change from the status quo would get us better results, or at the least highlight the problems in the current system. I vote we bring the cameras and surveillance to the streets but make the video feeds **open to all**, and let’s see what happens.

Tactically, I suggest a few things which should alleviate some privacy concerns, though they potentially introduce other issues. Firstly, the camera systems should blur all faces onboard the camera before transmitting to the internet. Secondly, the feeds should be open to all. Open to all? But then terrorists, stalkers, spousal abusers, etc. will have access to the cameras. Yes, but these people could already go down and view the street. Making the camera system open also means that researchers, investigative journalists and lawyers could evaluate and monitor the performance of the data collection and algorithms to search for and report on bias and inequality in enforcement. **This should be a bigger conversation; not enough people are discussing the pros and cons.**

### Questions

* People hate speed cameras/red light cameras; this is just bad.

People who speed and run red lights hate those cameras.

* Bicyclists and buses suck.

Ok.

* How would you get license plates?

The current DOT cameras are connected via Stealth Communication Inc, which runs fiber optics to each camera. The installed cameras are already capable of sufficient video resolution to detect license plates; the DOT has just decided not to broadcast higher-resolution video.

* In the video, why aren’t the detection boxes around all the cars? Why didn’t cars on the sidewalk get boxes?

The algorithm isn’t perfect and was a quick first attempt, so there is a lot of room for improvement in identifying other types of vehicles.
The training data contained only the fronts and backs of vehicles, so the algorithm isn’t trained to identify the sides of vehicles.

* In the video there is a scene where rain covers the lens and the algorithm thinks there are cars all over the road.

See above; the algorithm is only a first implementation. But this is a real-world concern and something to think about for a solution at scale: if the camera can’t see, then the system doesn’t work. For the data presented here, the time when the lens was covered in rain was a small portion of the total and doesn’t significantly affect the results.

* Doesn’t NYC need NY State approval to install traffic cameras? If so, is this whole idea dead in the water?

NY State sets out the traffic law for the state. So yes, in order to give automated tickets, New York State has to approve certain installations. However, the DOT can install as many cameras as it wants. And I believe that the NYPD can, within the current law, issue tickets to motorists for violations officers witness through cameras. So I think the system can be architected so as not to require NY State approval.

* Why not just build protected bike lanes and protected bus lanes and stops?

Yes. We should do that. But that infrastructure might be even more expensive. And cars still find ways to block protected bike lanes and protected bus stops.

* Why is the data for only 10 days? Why is this only one camera?

For the 10 days of video, my computer recorded over 800,000 images.

Amazon AWS p2.xlarge instances running the model took .3 seconds per image. This means it took days and approximately $65 to process the 10 days’ worth of images.
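Those figures are roughly self-consistent. Assuming on-demand p2.xlarge pricing of about $0.90 per hour (my assumption, not stated in the article), the back-of-envelope math works out:

```python
images = 800_000         # frames recorded over the 10 days
secs_per_image = 0.3     # p2.xlarge inference time per frame
hourly_rate = 0.90       # assumed p2.xlarge on-demand $/hour

gpu_hours = images * secs_per_image / 3600   # ~66.7 hours, i.e. almost 3 days
cost = gpu_hours * hourly_rate               # ~$60, close to the ~$65 reported
```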
However, the cost does not scale linearly, so a greater number of cameras would be cheaper per camera to process.

In short, while it is much cheaper than employing thousands of police officers, it still costs some money to run the algorithms and collect the data.

* Don’t these cameras already exist?

There are already some [bus lane](https://patch.com/new-york/queens/bus-lane-camera-violations-active-q52-q53-routes-dot) [cameras](https://cityroom.blogs.nytimes.com/2010/11/22/smile-youre-on-bus-lane-camera/?mtrref=www.google.com). There are also some red light and speed cameras. They are few and far between. They are also entirely closed; that is, we the people have no way of determining their number, their algorithms, their efficacy, their equality, etc.

If these cameras are at 145th and Saint Nicholas, then they are not working. That much should be obvious.

* What other violations could this setup catch? What other things could you do?

Count traffic, vehicle speed, pedestrians, bicyclists, etc.

Trucks over 53’ [https://twitter.com/illegal53NYC](https://twitter.com/illegal53NYC)

Are building owners or store owners shoveling the sidewalks? In time?

Should we send out snow trucks to every street, or deploy them only to streets that video can verify have snow blocking them? Same for street cleaning: should street cleaning happen even if the streets are clean? Trash pickup?

Congestion pricing requires determining where vehicles are in the city and at what time in order to apply surcharges.

TLC violations. The Taxi and Limousine Commission has a great program, [http://www.nyc.gov/html/tlc/html/passenger/sub\_consumer\_compl.shtml](http://www.nyc.gov/html/tlc/html/passenger/sub_consumer_compl.shtml), in which you can submit a picture of an offending vehicle, say a taxi in a bike lane, and the TLC will, after a process, fine the driver.
Why shouldn’t this exist citywide for all vehicles?

### Technical

The github [page](https://github.com/Bellspringsteen/OurCamera) has all the code.

Some general things learned:

1. The cameras are low-resolution 352x240 CIF CCTV and update at a frame rate of ~1 fps.
2. My Intel Core i7-4770K took about 24 hours to retrain the last layer of the model. I thought it would take longer and that I would need to use cloud GPUs.
3. [Tensorflow](https://www.tensorflow.org/) was really easy to get started with. I had taken some machine learning classes 8 years ago, and it keeps getting easier to get started.
4. I had 800,000 images to process with the model to look for vehicle positions. Each 352x240 image took 5 seconds to analyze on my Intel Core i7-4770K (CPU only). I thought it would be faster. I ended up using an Amazon AWS p2.xlarge with 1 Nvidia K80, and it took .3 seconds per image. I also thought that would be faster. **Side note: I tried to use Google Cloud, but their response time in authorizing my account for ML usage was 2 days. I gave up after 12 hours and switched to Amazon, which approved my GPU usage after 10 minutes.**
5. As you can see in the code, determining whether a vehicle is in breach of the traffic law was not done with machine learning. Instead I just drew a simple polygon representing the bus and bike lanes and counted a vehicle as blocking if it was the relevant type of vehicle (i.e., not a bus) and the center of the vehicle was in the lane.
6. I considered training the algorithm to spot bike lanes and bus stops and make its own determination. But I was concerned that, with the street markings barely visible, it would be very difficult. So instead, I just drew the lanes, because I could visit the street and determine the actual markings. If this were to be used at scale, there might have to be a manual step of drawing or determining the lanes/stops and mapping them to pixels.
7. I want to take a look at using YOLO for object detection.
8. Specific semantics for certain vehicles: I tried annotating the images with identifiers for further subsets of vehicles, such as taxis, UPS trucks, or police cars, with less impressive results. [I will have to use embeddings to further classify the types of vehicles.](https://youtu.be/LSX3qdy0dFg?t=2452)
9. I purchased a Movidius Neural Compute Stick and want to experiment with running the detection in real time on board a low-power system suitable for mounting on a street pole and running off solar.
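The lane check described in point 5 can be sketched with a standard ray-casting point-in-polygon test. The lane polygon coordinates and class names below are made up for illustration; the real polygons come from hand-drawing the lanes in pixel space:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: count edge crossings of a horizontal ray from (x, y)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def is_blocking(box, vehicle_class, lane_poly):
    """box = (xmin, ymin, xmax, ymax) in pixels. Buses may legally stop in the lane,
    so they never count as blocking."""
    if vehicle_class == "bus":
        return False
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    return point_in_polygon(cx, cy, lane_poly)

# Hypothetical bike-lane polygon and a detected truck whose center sits inside it.
lane = [(100, 0), (140, 0), (160, 240), (120, 240)]
blocked = is_blocking((110, 100, 150, 140), "truck", lane)
```

Using the box center rather than full box overlap keeps the check cheap and forgiving of sloppy detection boxes, at the cost of missing vehicles that only partially overhang the lane.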