Saturday, April 22, 2017

ArcObjects - Creating Polygon Features

We learn how to create polygon features using two approaches: rubber band and point collections.


Sunday, April 9, 2017

Creating Polyline features using ArcObjects on Enterprise Geodatabase

In this episode, we show two different methods to construct lines and then persist them as features in a geodatabase. The first method uses RubberBand (2:29); the second uses a collection of points (7:00). GitHub repo. Enjoy!

Subscribe to get the free weekly stuff!

Friday, April 7, 2017

Inserting Bulk Features using ArcObjects into PostgreSQL Enterprise Geodatabase

In this episode we learn how to insert features in bulk into the geodatabase using buffering and an insert cursor. The original method of calling .store is fine if you plan to create a few features and verify them, but it becomes slow when working with a large number of features. That is why we use this approach to insert features in bulk. GitHub repo
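The ArcObjects code itself is in the video, but the batching idea is general: buffer the rows and flush them in one call instead of storing and committing each feature individually. Here is a minimal Python sketch of that contrast using sqlite3 — the table name and schema are made up for illustration, not from the episode.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcels (id INTEGER PRIMARY KEY, x REAL, y REAL)")

def insert_one_by_one(rows):
    # Slow pattern: one insert and one commit per feature,
    # analogous to calling .store on every feature.
    for r in rows:
        conn.execute("INSERT INTO parcels (x, y) VALUES (?, ?)", r)
        conn.commit()

def insert_bulk(rows):
    # Bulk pattern: buffer all rows, push them through one
    # insert-cursor-style call, and commit once at the end.
    conn.executemany("INSERT INTO parcels (x, y) VALUES (?, ?)", rows)
    conn.commit()

rows = [(float(i), float(i) * 2) for i in range(10000)]
insert_bulk(rows)
count = conn.execute("SELECT COUNT(*) FROM parcels").fetchone()[0]
print(count)  # 10000
```

The per-row version works, and lets you verify each feature as you go, but the single buffered flush avoids paying the round-trip and commit cost ten thousand times.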

Tuesday, April 4, 2017

Set up a PostgreSQL Instance on Linux to work with ArcGIS

A few weeks back we did an episode on how to configure a PostgreSQL database on a Windows machine to work with ArcGIS. It was so popular that I decided to do a similar one, but on Linux.

We discuss how to set up a PostgreSQL instance from scratch on Ubuntu 16.04.2 LTS, configure the instance to work with ArcGIS, then create an enterprise geodatabase from ArcGIS Desktop, and finally connect to the geodatabase.

Hope you guys enjoy it, link below.


Saturday, April 1, 2017

Creating a button on ArcMap

In this video we discuss how to customize ArcMap to add our own button. When the button is clicked we will list all the layers in the active map.

Thursday, March 30, 2017

ArcObjects - Creating Point Features

We show two different methods to construct point geometries and then persist them as features in a geodatabase.

Sunday, March 26, 2017

Set up a Postgres Instance to work with ArcGIS [Windows]

We discuss how to set up a Postgres instance from scratch, configure the instance to work with ArcGIS, then create an enterprise geodatabase from ArcGIS Desktop, and finally connect to the geodatabase.

Thursday, March 23, 2017

Set up an ArcGIS Desktop Development Environment

In this video, we discuss how to set up an ArcGIS Desktop environment for development from scratch. We start with a clean Windows 8.1 machine and then install the required software. Here is the list of software required:

1. ArcGIS Desktop 10.4.1

2. Visual Studio Community 2015

3. ArcObjects SDK For Microsoft .NET Framework

Download ArcGIS for 60 days, details here:

Download Visual Studio 2015 Community

Questions and comments are welcomed!

-Hussein Nasser

Monday, November 28, 2016

Multi-User Geodatabase Youtube Series

After a long break, we are back to the channel with a new series. I always wanted to start recording episodes on the Enterprise Geodatabase; after all, this is what gets used in real production shops!

We will tackle, as always, a real-life example of implementing the multi-user geodatabase. Throw in any questions you would like answered in this series.

Hope you guys like it.

Sunday, August 7, 2016

ACID (Part 4)

In the previous post, we discussed the Read Committed isolation level. That level solved one type of read phenomenon, the dirty read, which we used to get under the Read Uncommitted isolation level, but we still got non-repeatable and phantom reads at that level. In this post we talk about the Repeatable Read isolation level: slightly more expensive to implement, but it kills the non-repeatable read phenomenon.

The final state of our Like table from the previous post looks like this.

Here is the reference to all ACID posts that I authored:

ACID (Part 1) - Atomicity and Consistency
ACID (Part 2) - Isolation (Read Uncommitted)
ACID (Part 3) - Isolation (Read Committed)
ACID (Part 4) - Isolation (Repeatable Read)

Repeatable Read

With the Repeatable Read isolation level, we not only read entries from committed transactions, but we also read them as of a previous timestamp — usually the moment our transaction began. Any other transaction that updates entries after that moment will be overlooked by our transaction, which retrieves an older "version" of the truth to make sure its results stay consistent. Let's jump to examples.

As we see, Eddard has already liked picture 2, yet he attempts to fire another like at the same picture. We have atomicity and consistency in place to prevent him from doing so, but let's see what happens. Eddard fires up a like on picture 2, and a fraction of a second later Sansa loads picture 2, which will retrieve the number of likes and the list of users who liked it.

Eddard sends the like, first query executes successfully, incrementing the likes count.

Before Eddard's second query executes, Sansa's select kicks in to read the picture 2 row, and she gets 3 likes instead of 4. This is because we are operating under the Repeatable Read isolation level, which only reads committed transactions, and since Eddard has still not committed (or rolled back) his transaction, Sansa gets the current committed value. So we have avoided the dirty read phenomenon.

Sansa issues another read to the Like table to get all the users who liked picture 2. She gets three rows — Jon, Reek, and Eddard — consistent with the number of likes she got.

Eddard's transaction moves on and executes the second query, which fails because of the constraint we have in place, rolling the likes entry back to 3.

Eddard gives up; his transaction is finished and he failed to ruin our system's consistent state (we also chopped off his head). Meanwhile a new user comes in, Cersei, and burns the database to the ground with wildfire. Not really — she likes picture 2. She is a brand new user who never liked picture 2 before, so her transaction commits fine.

Sansa's transaction is still running: she is querying other tables, doing some work, updating the view count perhaps, and then finally she comes back for a final read of picture 2 to get the likes count. Although the final committed value is 4, Sansa gets the committed value from when her transaction began, which was 3. So we avoided the non-repeatable read phenomenon with the Repeatable Read isolation level. It is slightly expensive, since we have to keep a history of versions of each committed value and search back for a previous value by timestamp.

She issues a final read to the Like table to find out the list of users who liked picture 2, and surprise, surprise: she gets an extra record. Hence the phantom read is still reproducible under the Repeatable Read isolation level.
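The phantom sneaks in because a classic repeatable-read implementation protects the rows a transaction has already read, but nothing covers brand new rows that match the same query predicate. A toy illustration of Sansa's two range queries — the in-memory "table" follows the post's example, and the code is purely illustrative:

```python
# The Like table as of Sansa's first read: three committed likes on picture 2.
likes = [
    {"user": "Jon",    "picture": 2},
    {"user": "Reek",   "picture": 2},
    {"user": "Eddard", "picture": 2},
]

def users_who_liked(picture):
    # Sansa's range query: all rows matching the predicate picture == 2.
    return sorted(row["user"] for row in likes if row["picture"] == picture)

first_read = users_who_liked(2)   # her first read: 3 rows

# Cersei's transaction inserts and commits a new matching row.
# Locks on the three rows Sansa already read do not cover this new one.
likes.append({"user": "Cersei", "picture": 2})

second_read = users_who_liked(2)  # same query, one extra "phantom" row
print(len(first_read), len(second_read))  # 3 4
```

The rows Sansa read the first time came back unchanged — that part is repeatable — but the result *set* grew, and that is the phantom.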

So we fix one problem with this isolation level, but we introduce the cost of keeping history versions of previously committed values, which we didn't have to do at the Read Committed level.

Next up: the Serializable isolation level.