Jekyll2020-06-22T15:26:33+00:00http://blog.neerajadhikari.com.np/feed.xmlNeeraj Adhikari’s BlogI am a programmer with interests in algorithms, mathematics, linux, free software, languages (both natural and programming), and more. I like to read a lot and write sometimes.Thinking about language2020-06-15T00:00:00+00:002020-06-15T00:00:00+00:00http://blog.neerajadhikari.com.np/thinking-about-language<p>I’ve always been fond of thinking and talking about language, but recently I’ve
been thinking a lot about it. My first language is Nepali, in the sense that it
is the first language I learnt. But for the most part I think in English, and I
find it much easier to write in English. So in that sense the “firstness” of
Nepali is in question. When it comes to speaking I find Nepali to be easier
than English by leaps and bounds. I’ve had tons of practice speaking Nepali,
and a lot of practice writing English, but very little practice otherwise.</p>
<h4 id="speechless">Speechless</h4>
<p>A few days ago when I was talking on the phone to a friend, I noticed something
curious — that even in speech, it is really hard for me to express complex
thoughts in Nepali. Ironically, what I was about to say to him was something
along the lines of, “I have recently noticed that I find it hard to express
anything beyond simple ideas in speaking.” This is not exactly a convoluted
philosophical argument, it is something people might need to say in regular
speech. If you can’t easily say something like that in a language you know, it
brings into doubt your knowledge of the language in question. Yet when I tried
to say it in Nepali I couldn’t find the words right off the top of my head. It
was difficult to start. In contrast, right after that, I was feeling a little
uncomfortable due to the heat and I said to my friend, “एकैछिन है, यो झ्याल खोल्छु” .
(<em>Just a sec, I’ll open this window</em>). Just as I said that I noticed how easy
it was for me to say. There was no noticeable gap between thinking the thought
and expressing it – it was near instant. It is a relatively simple thought,
and it is also something I’ve said many times before.</p>
<p>This — not being able to express something in my native language — is a
frequent occurrence. And I’m sure this is not limited to myself. Most native
Nepali speakers of my generation, myself included, have moved to speaking in
a mix of English and Nepali. It is common to use whole English sentences in
conversation that is primarily in Nepali, and it’s exceedingly rare to go a few
sentences without using English words. When I express concern about this, most
people counter with perfectly valid points. Some say that change in a language
is inevitable and we can’t really do anything to stop it. Some say language is
just something we use to get our point across and I shouldn’t worry about
people using the easiest means of doing so. And I understand that. And I’m not
saying that a language should remain ‘pure’ and resist adulteration, nor that
speakers should speak strictly a single language at a time. Considering how
much languages loan words from other languages, it might not even be possible
to draw a strict line between words of a certain language and words outside
that language. What I’m worried about is more emotional. More personal. It is
like a cherished childhood memory slowly but surely slipping away, and the
knowledge that there isn’t much you can do about it. It is like finding that
something you considered a big part of your identity and your heritage faded
and about to disappear, like looking at your insides and finding an organ
slowly eroding away.</p>
<h4 id="at-a-loss-for-words">At a loss for words</h4>
<p>While all this may not be a problem for speakers of the language, it is a
problem for the language itself. We think of language as having a life of its
own in a sense, and its speakers not being able to use it well is a threat to
its life. Now I’m not qualified to talk confidently on all the reasons why that
may be so, but one thing seems clear — small languages have a harder
time staying intact in these super-connected times. They’re too susceptible to
being displaced by larger languages. Much, much more media worth consuming is
produced in English, or even in Hindi, than in Nepali. People by
necessity have to read more books in English, watch more TV and movies in
English and laugh at more memes in English, by a huge margin. Another factor
that prevents fluent and sustained use of a language is the lack of words for
the ideas people frequently need to talk about (a word like <em>charger</em>, for
instance), and even more than that a lack of such words that sound natural in
speech (a <em>drug addict</em> could be, but rarely is, called a <em>लागुऔषध दुर्व्यसनी</em>).
<p>Another aspect of the erosion of Nepali is the disappearing use of its native
alphabet, Devanagari. This is something which all languages which use non-Latin
alphabets have experienced in the Internet age. Input is at least possible in
almost every script of the world thanks to the Unicode standard, but easy and
convenient input is a different matter. Nepali-speaking netizens use romanized
Nepali much more than they use Devanagari. People find it difficult to type
in Devanagari, and not without reason. Just having more letters than Latin’s 26
makes it difficult to build a good input method. And this is a problem that
compounds the one we have been talking about. Writing in the Latin script makes
it easy to use English words, while writing in Devanagari makes it harder, so
switching to Latin is the natural thing to do.</p>
<p>Unfortunately, for all my worries, a language is a collective experience, and
it exists in the minds of the millions that speak it, which means there’s
little hope of the efforts of a few being able to influence its course. Even
entities of power struggle to impose rules to ensure uniformity. There’s
nothing to do except to accept the state of affairs in silent submission. Or is
there? When you think of it, for every new word that later becomes an
indispensable part of a language, there’s that one person who used it first.
One may coin a word for the simple reason of being able to express their ideas
better, and a word may, with luck on its side, spread its wings and thrive in
wider usage. Journalists and writers are in a particularly good position to do
this, but with the Internet, so is everyone else. Of the many ways of creating
new words, the single most common in Nepali is borrowing. We have taken an
amazing number of words from English and incorporated them into daily speech in
recent times (I’ve never called a table anything other than a table in Nepali).
With time a lot of these loans change in pronunciation, become classified as
<em>आगन्तुक शब्द</em> (<em>guest words</em>) and we start to forget they come from elsewhere.
Examples of this are <em>बोतल</em> (<em>bottle</em>) and <em>गिलास</em> (<em>glass</em>). But the problem
with using too many loan words is that the language starts to lose its
originality and identity, it becomes too dependent on other languages. Another
mechanism of word formation we see is loaning words that have been
<a href="https://en.wikipedia.org/wiki/Compound_(linguistics)">compounded</a> or
<a href="https://en.wikipedia.org/wiki/Calque">calqued</a> in Sanskrit. For example, you
find in dictionaries that a microscope is called a <em>सूक्ष्म-दर्शक यन्त्र</em> (literally
a <em>small-displayer machine</em> in Sanskrit). In my opinion that sounds stiff and
unnatural. We have in our brains an unspoken idea of what colloquial Nepali
sounds like and that word doesn’t seem to fit. Another example of this is when
we have two more or less equivalent words for something, of which one is a
<a href="https://en.wikipedia.org/wiki/Tatsama">direct loan from Sanskrit</a>. <em>Shameful</em>
is usually rendered as <em>लज्जास्पद</em> but <em>लाजमर्दो</em> is in better use in speech and
sounds more ‘native’.</p>
<h4 id="what-i-mint-to-say">What I mint to say</h4>
<p>In my opinion, new words shouldn’t be limited to direct loans from Sanskrit or
English. There are <a href="https://en.wikipedia.org/wiki/Word_formation">other
possibilities</a>. Word-creation
should get creative. One technique I see promise in is taking calques from
English. One such phrase I have seen used a lot recently in Nepali media is
<em>पत्रु खाना</em>, a direct translation of the words <em>junk food</em>. Another fun way of
inventing words is abbreviation and clipping (like <em>karaoke</em> in Japanese). The
word <em>रुपा</em> has been more or less established in my circle of friends, and it is
an abbreviation of the initial syllables of <em>room partner</em>, roommate. Another
one I love to use (just at my home, because my mom and I are the only people
who understand it as of yet) is <em>गपका सूची</em>, an abbreviation of <em>गर्नु पर्ने कामको सूची</em>,
meaning <em>to-do list</em>.</p>
<p>I’ve always been of the opinion that, however impractical that may be, we
should try to use a language frequently and creatively if we want to save it
from a steady decline and eventual extinction. Additionally, now I think that
creating and using freshly minted words is vital. In part because smaller
languages like Nepali already lack expressiveness in many modern contexts, and
because adding new words to the vernacular is a sign of a healthy, thriving
language. What with social media platforms dime a dozen and memes moving with
incredible speed to reach hordes of people, there’s immense possibility for the
amateur word-coiner. Invent a new word! Who knows, it may be the next <em>fleek</em>.
Or the next <em>covfefe</em>.</p>Pi2020-03-14T00:00:00+00:002020-03-14T00:00:00+00:00http://blog.neerajadhikari.com.np/pi-day<p><em>Pie, a dish. A sweet, delicious if cooked right. But today, everyone discusses
another homophone. Yes, pi. The constant that people so revere. Okay, it’s not
rational, but it is magical, appealing. Certainly, truly — pi embodies
pureness. Take a completed circle’s P (called perimeter) and diameter’s
magnitude. Now, divided, gives π — greek alphabet pi —
instantly! Decimal form eternally long, sans clear structure. Pi may —
despite formulas (a plenty bulk) — indeed be actually random. Pi —
although posessing undoubted, provable number of features — has been
obdurate to untie. And thus, to π I tribute!
<br />
― Neeraj</em></p>
<p>If you are wondering why the above piece of text is so erratic, it is an
attempt at talking about pi while also encoding the first 100 digits of pi. The
first word is 3 letters long, the second 1 letter long, the third 4 letters
long, and so on. Ignore all punctuation except the em dash (—), which
stands for 0.</p>
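<p>For the curious, the decoding scheme can be sketched as a tiny script. This is my own hypothetical helper (not part of the original post); it assumes words are separated by whitespace and counts only Latin letters:</p>

```javascript
// Hypothetical decoder for the encoding described above: each word's
// letter count gives one digit, and an em dash (—) stands for 0.
function decodeDigits(text) {
    return text
        .replace(/—/g, " @ ")                             // mark each em dash
        .split(/\s+/)
        .map(w => (w === "@" ? "@" : w.replace(/[^A-Za-z]/g, "")))
        .filter(w => w.length > 0)
        .map(w => (w === "@" ? "0" : String(w.length)))
        .join("");
}

console.log(decodeDigits("Pie, a dish. A sweet, delicious if cooked right."));
// "314159265"
```

<p>Running it on the opening sentence of the poem recovers the first nine digits of pi.</p>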
<p>Doing this presented some obvious difficulties. The first hurdle is the fact
that the English language, glorious though it is, has a serious lack of
0-letter words. Even though the first 0 appears only in the 32<sup>nd</sup>
position in the decimal expansion of pi, overall it has its fair share of 0s.
At first I tried to get around this by using the digit 0 itself, but then the
whole thing started to be more about 0 than about pi. Another idea I had was of
using a circle pictogram (○) but that made it look a lot less like normal
text. I asked around for ideas, and my friend
<a href="https://www.facebook.com/rashmi.satyal">Rashmi</a> suggested ignoring 0s
altogether. That was an idea I didn’t like one bit, so I ignored her instead.
That’s when my friend <a href="https://bewakes.com">Pandu</a> suggested using dashes and
that made it a lot better.</p>
<p>Besides the conspicuous lack of 0-letter words, another problem with the
English language is the existence — and the abundance — of words
with ten letters or more. Indeed, a lot of things that could have been said
weren’t, because I couldn’t use longer words. You might have noticed how I substituted
<em>perimeter</em> for <em>circumference</em>. People do crazy things all the time, and
someone more skilled and with more time on hand could probably write a treatise
on pi several thousand digits long in this format. But that’s not likely to be me
any time soon.</p>I Can’t Read Anymore2019-10-11T00:00:00+00:002019-10-11T00:00:00+00:00http://blog.neerajadhikari.com.np/i-cant-read<p>I can’t read anymore. My eyes are fine, I can see and understand words. I don’t
mean I am physically unable to read text. Perhaps if you count everything,
including tweets, Reddit comments, Hacker News discussions, YouTube comments and
closed captions, product reviews on Amazon, nutrition labels on food packaging
and so on, in a day I now read more than ever. What I can’t seem to be able to
read is books. Or anything longer than a couple of paragraphs.</p>
<p>Yes, the title is somewhat of a clickbait. Even when only talking about books, a
more accurate title would have been <em>“It is very hard for me nowadays to sustain
my attention on the book I’m reading for a reasonable and previously normal
amount of time”</em>. But this title got your attention, and if you have a normal
attention span (unlike me) please forgive me and bear my writing for a few
more minutes.</p>
<p>Let me give a slightly more detailed view of the situation. As I’ve already said, a
woefully short attention span is one of the factors in play here. But it’s more
complicated than that. On most days I don’t even pick up a book. By now you must
be thinking, “So you don’t even pick up a book and declare that you can’t read?
What a liar!”. And I’d be inclined to agree with you. But when you think about
it, “can’t” can mean many things. It’s apparently a spectrum, encompassing a
whole smorgasbord of possibilities. And in this case “I can’t” means “I don’t
end up doing it as much as I would like to”.</p>
<p>I fondly remember times in the past when I would spend whole days reading.
Barely taking a break if I was reading a thrilling page turner, or else
leisurely going from sentence to sentence, enjoying every word. Lying on a bed
or on a sofa, tossing and turning, trying to find the most comfortable
eye-book-hand-body configuration. And yet “leisurely reading a book” feels like
a foreign concept now. To put things into perspective, it has been two whole
months and eleven days since I started reading my “currently reading” book. In
this time, I haven’t even finished a fifth of it. I made virtually zero progress
in the past one and a half months. And on average, the number of books I read
per year has steadily decreased for the past eight or ten years.</p>
<p>The simplest explanation for this is screens. More precisely, it is the Internet
instead of just screens. It is the Internet that gives screens their
attention-sucking power. Of course, there’s nothing wrong with the Internet as
a technology in itself. There’s this endless variety of information
available on the Internet as an immediate consequence of its design, but that’s
nothing compared to how modern internet companies design addictive systems that
keep us hooked, keep us coming back for more. I’ve already talked a lot
about this topic, and by now I must have started to sound like a broken record,
so I’ll not go further. Here are two great videos by <a href="https://www.youtube.com/watch?v=wf2VxeIm1no">CGP
Grey</a> and <a href="https://www.youtube.com/watch?v=VpHyLG-sc4g">exurb1a</a> on the topic if you don’t know what
I’m talking about. While we’re talking about books, also watch this <a href="https://www.youtube.com/watch?v=lIW5jBrrsS0">BEAUTIFUL
video</a> on much the same topic as
this post.</p>
<p>The point is, I’m an internet addict. (Come to think of it, that would have been
the proper title for this.) My free time is spent looking at screens. If I try
to stop wasting time on something and curtail its use, I end up spending my time
looking at something else instead. Not just my free time, I often end up
spending time I don’t have. And then it’s hello missed deadlines and hello
self-loathing. Internet addiction is a difficult thing to manage because unlike
substance addictions, you can’t cold-turkey quit the Internet. Unless you’re a
hermit, you need it for day-to-day life.</p>
<p>I don’t know what this means for the future, mine and everyone’s. Perhaps it
isn’t worth worrying about a lot. I haven’t heard anyone I know IRL saying that
the Internet has made it harder for them to read books. I’ve only found people
saying this on, ironically, the Internet. So maybe this affects only a small
fraction of people. Whatever the case, I’m trying to find ways to recover from
this condition and become a reader again. I’ve missed the joy of reading, and
there’s a towering stack of books I’ve procrastinated too long on. If I learn
something from my efforts to repair my brain, I’ll share it in a future post.
<em>“One weird trick to make you read THICK books again!!!”</em></p>Finding shapes - the simple elegance of the Hough Transform2019-06-25T00:00:00+00:002019-06-25T00:00:00+00:00http://blog.neerajadhikari.com.np/math/hough-transform<script type="text/javascript" src="/assets/js/hough-transform-script.js"></script>
<p>Given an image as a bitmap, essentially a grid of pixel values, how do you find
out if there is a circle in it? Or a square? A line? In recent times deep
learning methods, especially convolutional neural networks have been really
successful for tasks like recognizing objects and shapes in images. That is a
good choice for detecting if an image contains, say, a dog, or a car. But for
simpler geometrical shapes, image processing techniques are often sufficient.
In this post I want to explain one of those techniques, called the Hough
Transform, that I think is particularly elegant.</p>
<p>Other than a point, a line is the simplest of two-dimensional shapes.
Naturally, finding whether an image contains points is not a very interesting
problem. So here we will deal with the problem of finding lines in an image. It
turns out that finding other, more complicated shapes is more computationally
intensive and also less effective, so the case of lines is both more suitable
for illustration and more widely used in practice.</p>
<p>Here we will deal with <em>binary</em> images, or images whose pixels are either black
or white, without any in-between values or different color channels. Obviously
if you want to perform line detection on grayscale or color images, appropriate
processing steps must be applied to convert them to binary images. Also, we will
consider black pixels to be ‘foreground’ and white pixels to be ‘background’ so
that the correct lines are detected. Also, we should note that here we will
extract what is called an analytic representation of shapes, a set of
parameters that mathematically describe a shape. That is in contrast to
approaches which try to find the set of pixels in a given image which are part
of the shape.</p>
<p>To get started, consider an image where there are just two points. There is
only one possible line that passes through two points, so our technique will
have to find that line. Here is an illustration of the points and the line we
need.</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/images/hough-transform/line-through-points.png" alt="Line through two points" />
<br />
<br />
<p class="image-caption"><i>A line (gray) passing through two points(black).</i></p>
</div>
<p>In 2-D space, a point is a pair of numbers, the x-coordinate and the
y-coordinate. A line is analytically described by the (aptly named) linear
equation in x and y. For simplicity, let’s assume that the lines are
represented by an equation of the form <script type="math/tex">y = mx + c</script>. Take <script type="math/tex">y = 2x + 1</script>, for
example. An infinite number of points fall on the line, and whether a point
falls on a line can be checked by finding out if the co-ordinates of the point
satisfies the equation of the line. Here, the point <script type="math/tex">(3, 7)</script> falls on the
line because <script type="math/tex">7 = 2\cdot3 + 1</script>. But because there are an infinite number of
possible lines, we can’t go through all the lines and check whether any one
contains both of our points. This is where the Hough Transform comes to our
rescue. Because the equation of a line is of the form <script type="math/tex">y=mx+c</script>, there are two
numbers that represent a line. Two values. Does that ring a bell? Why, that’s
just what a point is. The <script type="math/tex">m</script> and the <script type="math/tex">c</script> of a line form a point in a 2-D
space different than our original x-y space, call it the <em>parameter space</em>.
Just as a line in our x-y space is a point in the parameter space, a point in
the x-y space is a line in the parameter space. If we have a point
<script type="math/tex">(x_1,y_1)</script>, then substituting it into an equation of a line gives us <script type="math/tex">y_1 =
mx_1 + c</script>, which can be rearranged as <script type="math/tex">c = -mx_1 + y_1</script>, giving us a line in
the parameter space with a <strong>slope</strong> of <script type="math/tex">-x_1</script> and a y-intercept of <script type="math/tex">y_1</script>.
It is because of this space-changing that the Hough Transform is a
<strong>transform</strong>.</p>
<p>The two points that we have in our image therefore make two lines in the
parameter space. If the two lines are not parallel, we can solve the two
equations to find the point where they intersect. And where they intersect,
well, that’s a point in the parameter space, thus, a line in the x-y space. And
there we have our line! If you are thinking, “Well, I guess that makes sense
but <em>why exactly</em>?”, then hold on. Let’s have a look at the equations once
again. Call our points <script type="math/tex">(x_1,y_1)</script> and <script type="math/tex">(x_2,y_2)</script>. The equations of the
lines they form in the parameter space are <script type="math/tex">y_1 = mx_1 + c</script> and
<script type="math/tex">y_2 = mx_2 + c</script> respectively. The point in line space where they intersect
is such a pair <script type="math/tex">(m_1,c_1)</script> that satisfies both of the equations. But saying
that a point <script type="math/tex">(m_1,c_1)</script> satisfies both equations is exactly the same
(algebraically) as saying that two points <script type="math/tex">(x_1,y_1)</script> and <script type="math/tex">(x_2,y_2)</script>
satisfy the equation <script type="math/tex">y=m_1x+c_1</script>. Both of these statements mean that the
equations <script type="math/tex">y_1 = m_1x_1 + c_1</script> and <script type="math/tex">y_2 = m_1x_2 + c_1</script> hold.</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/images/hough-transform/intersection.png" alt="Two lines intersecting at a point." />
<br />
<br />
<p class="image-caption"><i>Two lines (gray) intersecting at a point (black).</i></p>
</div>
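<p>As a quick sketch of the algebra above (the function and variable names are my own), we can recover the line through two points by intersecting their parameter-space lines:</p>

```javascript
// Recover the line through two points by intersecting their
// parameter-space lines c = -m*x + y (a sketch, not robust code).
function lineThroughPoints(p1, p2) {
    const [x1, y1] = p1;
    const [x2, y2] = p2;
    // Subtracting c = -m*x1 + y1 from c = -m*x2 + y2 gives
    // 0 = m*(x1 - x2) + (y2 - y1), so:
    const m = (y2 - y1) / (x2 - x1); // undefined for vertical lines!
    const c = y1 - m * x1;
    return { m, c };
}

// Both (3, 7) and (0, 1) lie on y = 2x + 1:
console.log(lineThroughPoints([3, 7], [0, 1])); // { m: 2, c: 1 }
```

<p>Note the division by <code>x2 - x1</code>, which already hints at the vertical-line problem discussed below.</p>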
<p>All fine and dandy, except we’ve got a few problems. First, our line equation
of the form <script type="math/tex">y=mx+c</script> cannot represent vertical lines, because the <script type="math/tex">m</script> in
the equation is the slope of the line and vertical lines have no finite slope.
Second, when we have a lot of points (in x-y space), that means there will be a
lot of lines in the parameter space. A lot of lines means a lot of computation,
because we have to check each line-pair to find if they intersect. So to bring
the problem of finding lines from the mathematical realm to the computational
realm, we make a few modifications to our technique. To get rid of the vertical
line problem, we use a different method of representing lines - the polar
equation. A polar equation is an equation of the form</p>
<script type="math/tex; mode=display">\rho = x\cos\theta + y\sin\theta</script>
<p>Here, <script type="math/tex">\rho</script> is the perpendicular distance (the shortest distance) of the
line from the origin and <script type="math/tex">\theta</script> is the angle that the perpendicular from the
origin to <em>our</em> line makes with the x-axis. Here is an illustration:</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/images/hough-transform/polar.png" alt="A line in polar form" />
<br />
<br />
<p class="image-caption"><i>A line and its polar parameters ρ and θ.</i></p>
</div>
<p>Notice that now, a point in x-y space will no longer be a line in parameter
space. It is now a sum of two sinusoids. The sum can be expressed as a single
sinusoid (with an amplitude and phase of its own):</p>
<script type="math/tex; mode=display">\rho = R\cos(\theta - \alpha)</script>
<p>where <script type="math/tex">R = \sqrt{x^2 + y^2}</script> and <script type="math/tex">\alpha = \cos^{-1}(x/R) = \sin^{-1}(y/R)</script>.
So instead of having to find intersections of lines in parameter space, we now
have to find intersections of sinusoidal curves, which is not nearly as
straightforward as solving linear equations. Thankfully, we will now discuss a
method that solves this problem without having to solve equations.</p>
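<p>A quick numeric sanity check of the single-sinusoid form, using an arbitrary point and angle (the variable names are mine):</p>

```javascript
// Check that x·cos(θ) + y·sin(θ) equals R·cos(θ − α) for the point (3, 4).
const [x, y] = [3, 4];
const R = Math.hypot(x, y);        // 5
const alpha = Math.atan2(y, x);    // so that cos α = x/R and sin α = y/R
const theta = 1.0;                 // an arbitrary angle
const lhs = x * Math.cos(theta) + y * Math.sin(theta);
const rhs = R * Math.cos(theta - alpha);
console.log(Math.abs(lhs - rhs) < 1e-12); // true
```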
<p>For the problem of having to find lots of intersections between lines or
curves, we do something called quantization - we consider our parameter space
to be discrete, allowing only a fixed number of possible values for each
parameter. This allows us to create an <em>accumulator</em>, a 2D array of points
in the parameter space, each cell of which keeps a count of how many <em>parameter
curves</em> pass through it. For every point we have in our x-y space, we create a
parameter curve in the parameter space and increment the value of each cell in
the accumulator through which the parameter curve passes. So every point in the
x-y space corresponds to a ‘1’ added to many cells in the accumulator. In the
end the values in the accumulator tell us how many points in the x-y space pass
through the line that is represented by that cell’s point in parameter space.
If a certain cell’s value crosses a fixed threshold, we take its co-ordinates
(parameters) and use them to determine a line in x-y space. That determined
line is the line we have just detected. So, there we have a line detector!</p>
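<p>Here is a minimal sketch of such a line detector. The function name, parameter names, and quantization choices (180 θ bins, integer ρ) are my own, not from any particular library:</p>

```javascript
// A crude Hough line detector over a list of [x, y] foreground points.
function houghLines(points, width, height, threshold) {
    const nTheta = 180;
    const maxRho = Math.ceil(Math.hypot(width, height));
    // Accumulator indexed by [thetaBin][rho + maxRho] (rho can be negative).
    const acc = Array.from({ length: nTheta },
        () => new Array(2 * maxRho + 1).fill(0));
    for (const [x, y] of points) {
        // Each point votes once per theta bin, along its sinusoid.
        for (let t = 0; t < nTheta; t++) {
            const theta = (t * Math.PI) / nTheta;
            const rho = Math.round(x * Math.cos(theta) + y * Math.sin(theta));
            acc[t][rho + maxRho] += 1;
        }
    }
    // Cells above the threshold are reported as detected lines.
    const lines = [];
    for (let t = 0; t < nTheta; t++) {
        for (let r = 0; r < acc[t].length; r++) {
            if (acc[t][r] >= threshold) {
                lines.push({ theta: (t * Math.PI) / nTheta, rho: r - maxRho });
            }
        }
    }
    return lines;
}

// Five collinear points on the vertical line x = 5:
const pts = [[5, 0], [5, 1], [5, 2], [5, 3], [5, 4]];
console.log(houghLines(pts, 10, 10, 5)
    .some(l => l.theta === 0 && l.rho === 5)); // true: x = 5 is detected
```

<p>Because the quantization is coarse, neighbouring cells may also cross the threshold; real implementations usually suppress non-maximum cells.</p>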
<h1 id="an-interactive-demonstration">An interactive demonstration</h1>
<div style="text-align: center">
<!--<canvas id="modifiable-grid" style="width:50%; border: 1px solid black;"></canvas>-->
<canvas id="modifiable-grid" style=" border: 0px solid black;"></canvas>
<canvas id="parameter-space" style=" border: 1px solid black;"></canvas>
<p style="text-align: center">
<input type="button" id="clearbutton" value="Clear" />
</p>
<script type="text/javascript">
const width = document.getElementsByTagName("article")[0].offsetWidth *
0.40;
const input = document.getElementById("modifiable-grid");
input.width = width;
input.height = input.width;
const { grid : inputGrid, context : inputContext,
view : inputGridView } = setupGrid(input, 40, 40, true, true, false);
const output = document.getElementById("parameter-space");
output.width = width;
output.height = output.width;
const { grid: outputGrid, context : outputContext,
view : outputGridView } = setupGrid(output, 60, 60, false, false, true);
const radianize = (X) => ((X * 2 * Math.PI) / 60.0);
const Y_SCALE = 2;
inputGrid.addPixelChangeCallback((x, y, on) => {
const func = X => Y_SCALE * ((x - 20) * Math.cos(radianize(X))
+ (y - 20) * Math.sin(radianize(X)));
if (on) {
outputGrid.drawFunction(func);
} else {
outputGrid.clearFunction(func);
}
});
let drawnOverlays = new Array();
const indexArrayInArray = (mainArray, elementArray) => {
    for (let i = 0; i < mainArray.length; i++) {
        // Report a match only when every element agrees, not just one.
        let allMatch = true;
        for (let j = 0; j < elementArray.length; j++) {
            if (mainArray[i][j] != elementArray[j]) {
                allMatch = false;
                break;
            }
        }
        if (allMatch) {
            return i;
        }
    }
    return -1;
};
outputGrid.addPixelChangeCallback((x, y, on, value) => {
const index = indexArrayInArray(drawnOverlays, [x, y]);
const theta = radianize(x);
const r = y / Y_SCALE;
if (value >= 8 && index == -1) {
inputGridView.drawOverlayLine(
X => 20 + (r - (X - 20) * Math.cos(theta)) / Math.sin(theta), 1);
drawnOverlays.push([x, y]);
console.log(drawnOverlays);
}
});
document.getElementById("clearbutton").onclick = function() {
outputGrid.clear();
outputGridView.clear();
inputGrid.clear();
inputGridView.clear();
}
</script>
</div>
<p><em>Here is a demonstration of the Hough Transform in operation. On the left there
is a grid representing x-y space that you can click on to add or remove points.
On the right is a representation of the parameter space for polar representation of lines.
If you add a point in the left grid, the corresponding parameter-space curve
will show up on the right. The points formed in parameter space by the curves
will accumulate on top of one another and grow darker, as more points on the
same line in x-y space are added. Once there are eight points overlapping in
the right grid, that is considered enough to detect a line in the x-y space and
the line appears on the left grid which passes through the points you have
drawn. The left grid goes from -20 to +20 in both axes, with the origin in
the center. The right grid runs from 0 to 2π horizontally and 0 to 30 vertically,
with the origin at the bottom left.</em></p>
<h1 id="finding-other-shapes">Finding other shapes</h1>
<p>So that’s how we find lines in an image but how about other shapes? Let’s
consider the circle. There are three parameters that define a circle - the
radius, and the x and y co-ordinates of the center. Thus, the parameter space
is three-dimensional here instead of two-dimensional. Our accumulator array is
likewise going to be a three-dimensional array. What would a point in x-y space
be in the circle parameter space? Let’s see. The equation of a circle is of the
form:</p>
<script type="math/tex; mode=display">(x - x_c)^2 + (y - y_c)^2 = r^2</script>
<p>where <script type="math/tex">(x_c,y_c)</script> are the co-ordinates of the center and <script type="math/tex">r</script> is the radius.
When we fix the x and y co-ordinates of a point we have in our x-y space, say
<script type="math/tex">(x_1,y_1)</script>, and treat the parameters <script type="math/tex">x_c</script>, <script type="math/tex">y_c</script> and <script type="math/tex">r</script> as
variables, what we have is an equation for a 3-D shape called a double cone,
two regular cones joined at their apexes, extending infinitely to both sides.</p>
<script type="math/tex; mode=display">r^2 = (x_1 - x_c)^2 + (y_1 - y_c)^2</script>
<p>Written with the usual letters used for 3-D cartesian co-ordinates,</p>
<script type="math/tex; mode=display">z^2 = (x_1 - x)^2 + (y_1 - y)^2</script>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/7/72/DoubleCone.png/509px-DoubleCone.png" alt="A double cone" />
<br />
<br />
<p class="image-caption"><i>A double cone (not infinitely extended).</i></p>
</div>
<p>To ‘plot’ the quantized 3D double-cone in the accumulator, we increment values
in the cells that correspond to points that lie on the double cone. Then, just
like we did with lines, we find the cells with a large number of double cones
passing through them and use their co-ordinates as parameters for our detected
circles in x-y space. Note that points don’t have to form a full circle to be
detected by this method - even relatively short arcs will be detected, because
points in the same arc satisfy the same circle equation.</p>
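<p>To make the three-dimensional accumulator concrete, here is a small hypothetical sketch of circle detection by voting in a quantized (x_c, y_c, r) space. This is not the code behind the demo above; the grid size, radius range, and vote threshold are illustrative choices.</p>

```javascript
// Sketch: circle detection with a 3-D Hough accumulator.
// points: array of [x, y] pairs; size: side length of the (xc, yc) grid;
// maxR: number of radius bins; threshold: votes needed to report a circle.
function detectCircles(points, size, maxR, threshold) {
  // accumulator[xc][yc][r] counts how many double cones pass through a cell
  const acc = Array.from({ length: size }, () =>
    Array.from({ length: size }, () => new Array(maxR).fill(0)));
  for (const [x, y] of points) {
    for (let xc = 0; xc < size; xc++) {
      for (let yc = 0; yc < size; yc++) {
        // radius of the circle centered at (xc, yc) through (x, y):
        // this traces the point's double cone through the accumulator
        const r = Math.round(Math.hypot(x - xc, y - yc));
        if (r > 0 && r < maxR) acc[xc][yc][r] += 1;
      }
    }
  }
  // cells with enough votes become detected circles
  const circles = [];
  for (let xc = 0; xc < size; xc++)
    for (let yc = 0; yc < size; yc++)
      for (let r = 1; r < maxR; r++)
        if (acc[xc][yc][r] >= threshold) circles.push([xc, yc, r]);
  return circles;
}
```

<p>Feeding it a dozen points sampled from a circle of radius 5 centered at (10, 10), for example, makes the cell (10, 10, 5) accumulate enough votes to be reported.</p>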
<p>Similarly, for shapes that need more parameters to describe them, we need
parameter spaces of higher and higher dimensions and working with the
accumulator soon becomes unwieldy. Squares, like circles, need three
parameters, rectangles and ellipses need four. So use of the Hough Transform in
practical applications is usually confined to finding lines.</p>Observations of a recovering procrastinator2019-03-23T00:00:00+00:002019-03-23T00:00:00+00:00http://blog.neerajadhikari.com.np/observations-recovering<p>On November the 10th, 2018 I published a post on this blog as a ‘commitment
device’, promising to myself (and to the internet, I guess?) that I would
publish one post every week on the blog for four months. It has been well past
four months, and I published 9 posts in that time. That averages to about one
post every 14 days. So obviously, this endeavor has been a failure. But in
these four or so months, I have learnt things about myself and about this
Achilles’ heel of human nature called procrastination. I’ve not yet overcome
procrastination; I’ve got a long, long way to go. So take anything that sounds
like advice in this post with a grain of salt. But I think I’m at least walking
in the right direction, slowly making progress, taking the tiniest step towards
recovery each day. Here are a few observations I’ve made in this time:</p>
<h3 id="humans-understand-themselves-very-little">Humans understand themselves very little</h3>
<p>It has been some time since I started a habit of meditating a little every
morning, and even though I’m really bad at it, there are a few things that
meditation has helped me with. The kind of meditation I do involves trying to
hold your attention on something (usually your breathing), observing the
thoughts your mind drifts to, and gently bringing yourself back to the
subject of your attention. Essentially, this is turning your mind’s eye inwards,
towards your own thoughts and feelings. Since starting, I’ve come to
understand better some crucial things about myself - most importantly, just
how little I understand about myself. I’ve also realized how even
a small lack of sleep makes me really bad at focusing on things, and how social
media and online news take my mind from relative calmness to an anxious and
jittery state.</p>
<p>At times I am surprised by how unaware I am of things like even how tired my
body is and how that’s affecting my emotional state. Throughout the day, I’m
paying very little attention to what kinds of feelings I am feeling or what
thoughts I am having. And I suppose most of us are the same. While developing
the skill of awareness and mindfulness is one way to better understand
yourself, another really good way is to read. Read anything about how the brain
works, how the body works, what activities are good for mental and bodily
health, anything that is backed by proper science. We as a species know a tiny
amount compared to what there is to know, but what we can do right now is work
with what we have.</p>
<h3 id="what-you-do-now-will-be-a-tiny-bit-easier-the-next-time">What you do now will be a tiny bit easier the next time</h3>
<p>Anything you do is practice. The next time you do it, doing the exact same
activity will be a little bit easier. Repeat it enough times, it becomes a
habit. We tend to understand the ‘becoming a habit’ part easily enough, but I
think the key insight is that anytime you’re doing something bad for
yourself, you’re making it harder for yourself not to do that thing the next
time. For things you don’t want to keep doing forever, this means that as time
progresses, what you actually want to be doing becomes more and more difficult. If you feel the
temptation to binge on your favorite TV Series while your exams are near, the
next time you feel the temptation it will be easier for you to give in instead
of to resist. However, if you manage to resist, resisting will be easier the
next time than it is now. Thinking of habits - especially bad ones - in this
way is something really powerful, and it has certainly helped me.</p>
<h3 id="addictive-stuff-should-be-identified-and-stayed-away-from">Addictive stuff should be identified and stayed away from</h3>
<p>There are so many things in our lives that have the potential to be addictive,
and so many things we are already addicted to without realizing that it’s a
problem. As Paul Graham writes in his <a href="http://www.paulgraham.com/addiction.html">wonderful essay</a>, as technology has developed, there are more and
more addictive things around us, and the things that are already addictive are
getting more so. For a chronic procrastinator, this new-age addictive stuff -
video games, social media, internet porn - can account for the largest share of
their wasted time. In my opinion, the only thing we can do to deal with these
is to identify the most addictive stuff and stay religiously away from it.
For me, Reddit and YouTube are the most problematic. Nowadays I seldom open
Reddit and have greatly decreased the amount of time I spend on YouTube. Even
doing this much has been really difficult and has required a lot of failed
attempts. But it’s not like there’s any other way than to keep trying.</p>
<h3 id="managing-time-is-something-new">Managing time is something new</h3>
<p>What I have recently discovered is that having been a chronic procrastinator
for so long, my long-term perception of time is very distorted. Estimating how
long some task will take, knowing when it’s too late to finish something in
time, how much free time I really have - these are things I’m horribly bad at.
Probably because chronic procrastination has never let me do any of these
properly. I didn’t have to learn to make such choices and estimates, because
procrastination made all the choices for me, even if they were the worst
possible choices. In other words, I was too busy procrastinating to have any
time to manage. So once you’ve got a little better at not procrastinating, the
problem you’ll soon face is that of being absolutely unaware how to manage time
effectively. I think I am at this stage, slowly learning a better model of how
time passes, how long things take, and what should be done when.</p>
<h3 id="there-will-be-struggle-a-lot-of-it">There will be struggle, a lot of it</h3>
<p>Trying to change your long-standing habits is essentially trying to fight what
you have been for your entire life. I think about for how long I have been the
way I have been, and how crazy it is to expect my behavior and habits to change
in mere weeks. In the beginning, during your first attempts at changing your
habits, the overwhelmingly more probable outcome is that you are going to fail,
and fail spectacularly. The important thing is not to beat yourself up over
failures, but to try again with more resolve and from a more informed state of
mind. Throughout the early stages, it is going to be really hard, and you are
going to fail a lot. But if you learn even a little about yourself and the nature
of procrastination and addiction, all the time you spent trying wasn’t wasted.
You’ve got to keep trying, because this is not a fight you can give up on.</p>Brave New World - A Review2019-02-14T00:00:00+00:002019-02-14T00:00:00+00:00http://blog.neerajadhikari.com.np/review/brave-new-world<p><em>This is a book review for Aldous Huxley’s classic <strong>Brave New World</strong> that I
originally <a href="https://www.goodreads.com/review/show/2265235161">posted on GoodReads</a>.</em></p>
<p>The year is A.F. 632 and the World Controllers have created a perfectly stable
society. Viviparous reproduction is no longer allowed and humans are grown from
lab-fertilized embryos in bottles which precisely simulate the conditions
required for the growth of babies of a whole range of castes, from the epsilon
semi-morons to the top alpha-plus-plusses. The children are conditioned
through neo-pavlovian training and hypnopaedic repetition, ensuring that the
thoughts and instincts they will have will make them happy and the society
stable. Everyone is happy now, and everyone belongs to everyone else.</p>
<p><i>Brave New World</i> is probably the most important book I have ever read.
Perhaps not the book I have enjoyed the most, but important all the same.
Important in the sense that it is one of those books that everyone should read.
There’s the saying that people who don’t study history are doomed to repeat it.
While maybe that is true, I think good works of dystopian (or is it utopian?)
fiction are even more important, because they warn us about possible futures
before we have to suffer them. In a sense, they present us ‘alternative
histories’ to learn from. They stand as glaring examples of what a society
should not be like. If widely read and well understood, they steer us away from
coming to resemble their worlds. No wonder that “Orwellian” is such a common, and
such a powerful word in modern-day discourse.</p>
<p>The book is also important because it makes you think, it makes you question
your complacent place in society. I found myself thinking about how much of
my thinking was really independent and how much was due to subliminal
‘conditioning’ that I wasn’t aware of. I thought about whether today’s tech
giants are the new world controllers, pulling the strings of their distraction
machines to condition us for their profit. I thought about how much evil we can
inflict on ourselves as a whole if doing so makes us feel good.</p>
<p>Besides the immense philosophical significance, the book is also great for its
style. Huxley’s writing is vivid and evocative, and that can be felt right from
the first paragraph. One section I really liked was the part when he uses the
text equivalent of the ‘cross-cutting’ used in cinema. Mustapha Mond lecturing
the students, Bernard seething at his colleagues’ comments, Lenina talking to
her friend in the changing room, and all three scenes narrated simultaneously,
alternating lines devoted to each. And I found the whole Fordianism thing
amusing and a bit hilarious, from people gasping ‘Oh Ford!’ to words like
‘Fordship’ and ‘Unfordly’, and of course the holy ‘T’.</p>
<p>I finished reading <i>Brave New World</i> 41 days after starting it for the
second time. During this time I avoided opening GoodReads, lest it should
remind me of my procrastination. There were week-long stretches during which
I didn’t open the book at all. After years of being a chronic procrastinator
without fully realizing how bad a state I was in, I have recently started to
be more aware of my thoughts and actions, and have taken major
steps towards recovery. However, the experience of reading this book was a
painful reminder - in multiple ways - of how fallible the human mind is
and most importantly of our incredible capacity to forgo and forget everything
important when in comfort, when in that blissful cocoon of pleasing
distractions.</p>Algorithmic Poems2019-01-14T00:00:00+00:002019-01-14T00:00:00+00:00http://blog.neerajadhikari.com.np/algorithms/algorithmic-poems<p>While there are now algorithms left and right writing poetry (or, depending on
your opinion, writing stuff that is not quite poetry), let’s talk about
something even more interesting - poetry that describes algorithms.
Surprisingly, I couldn’t find more than a few such poems. It is difficult
enough to exactly and concisely express most algorithms even in prose, so
perhaps it is not so surprising after all. But simpler, elegant algorithms are
relatively easier to express in prose and, as we shall see, in poetry. As we might
expect, these poems do not provide a complete specification of the algorithms, but
nevertheless describe their general working very well. Here I list the ones
that I have found, and I’ll keep expanding this list as I hopefully find more
in the future.</p>
<h2 id="sieve-of-eratosthenes">Sieve of Eratosthenes</h2>
<blockquote>
<p>Sift the Two’s and Sift the Three’s,<br />
The Sieve of Eratosthenes.<br />
When the multiples sublime,<br />
The numbers that remain are Prime.<br /><br />
– Anonymous</p>
</blockquote>
<p>The <a href="https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes">sieve of
Eratosthenes</a> is an
ancient algorithm for finding all primes up to a given limit. It is a simple
algorithm, discovered by the Greek mathematician Eratosthenes of Cyrene
millennia before there were computers or even the formal study of algorithms.
It works by starting with a list of all natural numbers less than a limit and
crossing out multiples of discovered primes one by one, starting with two. Once
there are no bigger primes to cross out multiples of, the algorithm ends and
all numbers that remain unmarked are primes.</p>
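<p>The algorithm the poem sums up fits in a few lines of code. Here is a minimal JavaScript sketch (my own illustration, not from any particular source):</p>

```javascript
// Sieve of Eratosthenes: return all primes up to `limit`.
function sieveOfEratosthenes(limit) {
  const isPrime = new Array(limit + 1).fill(true); // start with every number unmarked
  isPrime[0] = isPrime[1] = false;                 // 0 and 1 are not prime
  for (let p = 2; p * p <= limit; p++) {
    if (!isPrime[p]) continue;                     // p was already crossed out
    for (let m = p * p; m <= limit; m += p) {
      isPrime[m] = false;                          // cross out multiples of p
    }
  }
  // "The numbers that remain are Prime."
  const primes = [];
  isPrime.forEach((ok, n) => { if (ok) primes.push(n); });
  return primes;
}
```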
<h2 id="gradient-descent">Gradient Descent</h2>
<blockquote>
<p>Still the fog persists.<br />
Let the incline have its way<br />
and set your compass.<br /><br />
Keep taking footsteps<br />
until that first suspicion<br />
of an uphill slope<br /><br />
then turn left or right,<br />
just one of many zig-zags.<br />
Will it ever end?<br /></p>
</blockquote>
<p>This poem by Michael Bartholomew-Biggs appears in his paper titled <a href="https://www.researchgate.net/publication/282948019_Poetry_Algorithms">Algorithms
and Poetry</a>, along with other poems describing algorithms for
optimization problems. This one describes the gradient descent or steepest
descent method of finding the minimum of a function in possibly
high-dimensional space. It is the most common method used for optimizing neural
network weights. It works by starting at some initial position and gradually
moving in the direction (in the input space) of the steepest descent of the
function value. The algorithm stops at the “first suspicion of an uphill
slope”, or the point from which there is no way downhill. The fog in the poem
refers to the fact that at any given time we can only ‘observe’ the immediate
surroundings of the point we are at, not the entire topography.</p>
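<p>The poem’s steps map almost line by line onto code. Here is a hedged numerical sketch; the gradient function, step size, and stopping tolerance are illustrative choices, not part of the poem or the paper:</p>

```javascript
// Sketch: gradient descent on a function given its gradient.
// grad: function mapping a point (array) to the gradient at that point.
function gradientDescent(grad, start, stepSize = 0.1, tolerance = 1e-6, maxSteps = 10000) {
  let x = start.slice();
  for (let step = 0; step < maxSteps; step++) {
    const g = grad(x);                            // "set your compass": the local slope
    if (Math.hypot(...g) < tolerance) break;      // "first suspicion of an uphill slope"
    x = x.map((xi, i) => xi - stepSize * g[i]);   // a footstep downhill
  }
  return x;
}
```

<p>Minimizing, say, f(x, y) = (x - 3)^2 + (y + 1)^2 from the origin converges to (3, -1), the bottom of the bowl.</p>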
<h2 id="spanning-tree-protocol">Spanning Tree Protocol</h2>
<blockquote>
<p>I think that I shall never see<br />
A graph more lovely than a tree.<br /><br />
A tree whose crucial property<br />
Is loop-free connectivity.<br /><br />
A tree which must be sure to span<br />
So packets can reach every LAN.<br /><br />
First the root must be selected.<br />
By ID it is elected.<br /><br />
Least cost paths from root are traced.<br />
In the tree these paths are placed.<br /><br />
A mesh is made by folks like me<br />
Then bridges find a spanning tree.</p>
</blockquote>
<p>This one by <a href="https://en.wikipedia.org/wiki/Radia_Perlman">Radia Perlman</a> is my
favorite. It describes the algorithm for the <a href="https://en.wikipedia.org/wiki/Spanning_Tree_Protocol">Spanning Tree Protocol</a>, which is implemented by
layer-2 bridges in a network whose topology may contain cycles, which lead
to <em>broadcast radiation</em>, the situation in which broadcast frames flood the
network. To prevent that, the bridges compute a loop-free subset of the network
topology that spans the entire network, in other words, a spanning tree. The
algorithm runs on each bridge in the network, and they communicate by sending
out configuration messages to their neighboring bridges. First they agree on a
root bridge for the tree based on an ID derived from the MAC address, and keep
the links that provide shortest distance paths to the root bridge in the
spanning tree. The links not in the spanning tree are deactivated.</p>
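<p>As a rough illustration of the result the bridges converge on, here is a centralized sketch: elect the lowest ID as root, then keep only each bridge’s least-cost link toward the root. (The real protocol computes this distributively through exchanged configuration messages; the IDs and costs here are made up.)</p>

```javascript
// Sketch: compute the spanning tree STP would settle on, centrally.
// bridgeIds: array of numeric bridge IDs; links: array of [idA, idB, cost].
// Returns the links kept in the tree; all other links would be blocked.
function spanningTree(bridgeIds, links) {
  const root = Math.min(...bridgeIds);            // "By ID it is elected."
  const dist = new Map(bridgeIds.map(id => [id, Infinity]));
  const viaLink = new Map();                      // each bridge's link toward the root
  dist.set(root, 0);
  const visited = new Set();
  while (visited.size < bridgeIds.length) {
    // pick the unvisited bridge currently closest to the root (Dijkstra-style)
    let u = null;
    for (const id of bridgeIds)
      if (!visited.has(id) && (u === null || dist.get(id) < dist.get(u))) u = id;
    visited.add(u);
    // "Least cost paths from root are traced."
    for (const [a, b, cost] of links) {
      if (a !== u && b !== u) continue;
      const v = a === u ? b : a;
      if (dist.get(u) + cost < dist.get(v)) {
        dist.set(v, dist.get(u) + cost);
        viaLink.set(v, [a, b, cost]);
      }
    }
  }
  return [...viaLink.values()];                   // "In the tree these paths are placed."
}
```

<p>For a triangle of three bridges where the direct link between bridges 1 and 3 is costlier than going through bridge 2, that direct link ends up outside the tree and would be deactivated.</p>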
<p>While working at Digital Equipment Corporation (DEC), Radia Perlman was tasked
with developing a constant-memory protocol that enabled bridges to locate loops
in a local area network. Legend says that she was given a week to finish the
job, but she invented the protocol in a day and spent the rest of the time
writing this poem.</p>
<p><em>Do you know of, or have yourself written, a poem that describes an algorithm?
If so, send it to me and I’ll add it to the list.</em></p>Paying Attention2019-01-07T00:00:00+00:002019-01-07T00:00:00+00:00http://blog.neerajadhikari.com.np/freedom/paying-attention<p>Most days, the first thing I do after I wake up is reach for my phone. It is
almost always at my bedside, because I have formed the unfortunate habit of
having to scroll through or read something on my phone to fall asleep. As an
occasional exception, the phone is left charging, on the table by my bedside.
And with my mind still groggy, I check for notifications and open either
Instagram, or Reddit, or Twitter, any app that promises me fresh content to get
amused with, all that has happened in the eight or so hours I have been asleep.</p>
<p>Through the course of the day, countless times, I find myself reaching for my
phone, opening the app drawer, and before I am even conscious of what I am
doing, my fingers have opened one of those ‘fresh content’ apps and are
dragging, scrolling, tapping. Sometimes, I realize what is happening and
hurriedly close the app and return my phone to where it was. Most of the time,
however, it is too late - the realization is futile - and I end up caught in an
endless rabbit hole of ‘fresh content’, down, down, with no end in sight.
Anywhere from a few minutes to hours. Gone, without anything of import
happening. Not really here, not really anywhere. Just consuming, consuming all
the endless novelty that our poor primate brains are hapless against. We never
evolved for this.</p>
<p>A thing I have noticed is that most of this spontaneous reaching-for-the-apps
happens when I have to do some work and hit a small obstacle. Something forces
me to step back and think, or I have to do something relatively unpleasant or
icky. No, not even unpleasant. Just something that is a trifle more difficult than
what I’m doing, something a hair’s width out of my comfort zone. Then the hand
moves away from the keyboard (or paper), reaches instinctively for the phone
conveniently lying nearby, and I forget what it was that I was having
difficulty with, and forget what I was doing at all. The times when I mindfully
and intentionally open these ‘fresh content’ apps are relatively rare.
Amazingly, I think I have been so conditioned to forgetting my intentions when
unlocking my phone that I sometimes forget what I’m trying to do on my phone
even when I open it with an intention to do something specific.</p>
<p>What is happening here? Some of these apps have been deliberately designed to
be like this, attention-sucking, addictive. Instagram, Facebook, Twitter, all
the free-to-play videogames. They are using well-known <a href="https://www.psychologytoday.com/us/blog/brain-wise/201311/
use-unpredictable-rewards-keep-behavior-going">results in psychology</a> to game us into spending more
and more of our time. Users spending more time on them means more data on user
preferences, more opportunities to show ads, more profit. The profit-making
companies will maximize profit, of course. And since the apps themselves are
free, we are the product, or rather our eyes, our attention. They deliver what
we crave, keep us hooked, and sell a chunk of our attention to advertisers. We
are not just paying attention, but paying with attention. But the funny thing
is, in the midst of all these companies working to keep us hooked, the platform
that sucks up most of my attention is one that isn’t deliberately trying -
Reddit. Though it is a for-profit company, Reddit hasn’t made any profit yet
and its basic features are the same as when it was created 13 years ago.
Letting users post links, letting users comment on them, letting users create
deep comment threads. And somehow, Reddit is the biggest source of endlessly
fresh content. Post after funny post, comment after hilarious comment, and a
healthy dose of outrage to spice it all up. Whether by design or by accident,
one thing is certain - I am losing control, I am relinquishing attention. We
all are.</p>
<p>And I think I’ve lost a great deal more. One day, some months ago, I set my
phone aside and just sat in a chair for five minutes, doing nothing. And when I
did that I realized how rare that had become, that doing nothing, that being
with yourself, that letting the mind wander. The phone is in my hand and on my
eyes before solitude and contemplation have a chance to settle in. My brain is
addicted to distraction. It may be just a coincidence, but the rise of my
distraction addiction coincides with the rise of my chronic procrastination.
How much has it changed me, how much has it rewired my brain? I fondly remember
those times in the past - times when I used to be able to do work I enjoy in a
flow state, without a great deal of difficulty. Also called ‘being in the
zone’. The movie ‘The Social Network’ called it being ‘wired in’, in the sense that
the programmer is so engrossed in their work that they feel like a part of the
computer themselves. I loved that phrase, and enjoyed being ‘wired in’. Now
there are too many distractions, or rather I am too susceptible to
distractions. Reaching the flow state is too difficult now, doing anything
important is too difficult. What is easy is getting distracted by yet another
suggested Youtube video, yet another Instagram story, yet another meme on
Reddit.</p>
<p>Before these distraction machines change our brains irreversibly, we have to do
something. We have to put them on a leash, to limit their usage somehow, to
deny them access to our mind-space. If we can’t let go entirely, at least
maintain the distance. Maintain a healthy relationship, so to speak. When the
other party takes special care to abuse your weaknesses for their profit,
maintaining a healthy relationship is especially difficult. But it is all the
more important.</p>
<p>P.S. Here is a list of tools I use:</p>
<ol>
<li><a href="https://addons.mozilla.org/en-US/firefox/addon/leechblock-ng/">LeechBlock</a>
plugin for Firefox and
<a href="https://chrome.google.com/webstore/detail/stayfocusd/laankejkbhbdhmipfmgcngdelahlfoji">StayFocusd</a> for Chrome.</li>
<li>DF Tube (distraction free youtube) plugin for <a href="https://addons.mozilla.org/en-US/firefox/addon/df-youtube/">Firefox</a> and <a href="https://chrome.google.com/webstore/detail/df-tube-distraction-free/mjdepdfccjgcndkmemponafgioodelna?hl=en">Chrome</a></li>
<li><a href="https://play.google.com/store/apps/details?id=co.blocksite&hl=en_US">Block Site</a> app for Android</li>
</ol>
<p>Here is a video on <a href="https://www.youtube.com/watch?v=VpHyLG-sc4g">digital hygiene</a>, and here is a series of
<a href="https://www.youtube.com/playlist?list=PLQwg0PxpUPlqyRdl_93W4oLLfCW4GAGB-">videos on the attention economy</a>.</p>Quitting Facebook and the future of social networks2018-12-16T00:00:00+00:002018-12-16T00:00:00+00:00http://blog.neerajadhikari.com.np/freedom/quitting-facebook<p>A little more than two years ago, the summer of 2016, I deactivated my Facebook
account. I had been planning to leave Facebook for some time then, but due to
one thing or the other I had hesitated from making the decision. I did not
delete my account right away - just deactivated it - so that I could just log
back in if something came up and I needed it. But after that, I never felt a
need to get back on Facebook. It felt kind of freeing, to tell the truth. After
that one story after another broke about Facebook’s data scandals, and they
strengthened my resolve to not go back. Pretty soon, Facebook changed from
being siginificant part of my day-to-day thoughts to something stored in drawer
at the corner of my mind and forgotten about.</p>
<p>All this time, I constantly put off actually deleting my account. A little less
than two weeks ago, I opened Facebook to get in touch with one of my professors
at <a href="http://pcampus.edu.np/">Pulchowk</a> after attempts to contact him by email,
text and phone had failed. I thought it would be a good opportunity to finally
delete my account. (Unsurprisingly, he did not respond on Facebook either, but I
later reached him by phone.) And today I downloaded my personal data and
permanently deleted my account, ten years after first creating it.</p>
<p>There are a bazillion articles on the Internet telling you why you should quit
Facebook, but here I want to discuss something broader than that. <em>Are social
networks good for the world? Do we need them at all?</em> Of course, we do not need
them in the way we need hospitals or electricity. What I mean here is do we
need them badly enough to justify all of their failures and dangers? First,
let’s start with Facebook itself. I don’t know whether it is a net positive or a
net negative for society and for the world, but it definitely was a net
negative for me. The costs clearly outweighed the benefits. I found myself
spending lots of time scrolling through the feed and ending up unhappy about
something, usually myself. More than half of my Facebook ‘friends’ were people
I barely knew. And in that feed were all of their political opinions and
depressingly cringey memes. And posts from the other half, the people I knew,
usually upset me for a different reason. All these people were doing cool
things with their life, traveling, accomplishing, sharing their shiny lives.
And I was sitting and scrolling, not sure about a whole lot of things in my
life. These were what one could call selfish reasons, but I had philosophical
objections too. I hold dear freedom, transparency and privacy, and Facebook
seemed to be the antithesis of these concepts.</p>
<p>Since I have quit Facebook, I feel that I have not missed out on much, and that
whatever I missed was not important. Almost everybody now uses one messaging
app or the other besides Facebook messenger, so texting and VoIP calls are not
a problem. You would survive just fine, and probably even improve your well-being,
by not looking at all their vacation photos and political opinions. While we are
on well-being, let me remind you that Facebook once ran a <a href="https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds">secret experiment on
its users</a> in which it tried to control their moods by
manipulating the feed shown to them.</p>
<p>One thing that made quitting Facebook a lot easier for me was Instagram. Most
of my contacts who I didn’t mind getting updates about were on Instagram, and
Instagram was a lot more bearable and less addictive then. One of my friends,
in good humor, called me out on the hypocrisy of quitting Facebook for
apparently philosophical reasons but continuing to use Instagram. That was a
time when the two were very much different, but now it is as though a certain
Facebook-ness is slowly creeping up on Instagram and transforming it. I don’t
know if it’s just me, but it feels as though Instagram’s addictiveness has
steadily increased in the past three or four years. I’m beginning to severely
limit my time on the app and have started thinking about whether I should
entirely quit it.</p>
<p>It seems to be the case that as social networks grow and need to chase ever
higher advertisement revenues, it becomes really lucrative to compromise the
users’ freedoms. All the data we unthinkingly submit to them reveals a lot
about us, and that can be, and has been, used to influence us so that we keep
coming back. It is immensely profitable to make the apps as addictive as
possible, to the point that we become dependent. Even when we realize all of
this, the same <a href="https://en.wikipedia.org/wiki/Network_effect">network effect</a>
that made the social network huge prevents us from outright quitting. All of
our friends are there, so the <a href="https://en.wikipedia.org/wiki/Fear_of_missing_out">FOMO</a> is really strong. It can be difficult to sever all ties to
a social network that has been practically the center of our digital lives for
so long.</p>
<p>Come to think about it, among the social networks I have used, the only one I
find to be a clear net positive is Twitter. Perhaps we don’t <em>need</em> it, but at
the moment it’s not the worst thing to have. I have moments when I get
disappointed at all the noise - the crassness, adults I know tweeting like edgy
teens, and the occasional bit of extremist politics. But there are a lot of nice
moments as well - stimulating conversations with people with keen viewpoints,
genuinely good memes, amazing twitter threads that people use as a substitute
medium for long-form writing. One thing that helps is that I follow far fewer
people, compared to my Instagram or Facebook friends. And since
Twitter started showing me tweets that people I follow liked, I have used the mute
button liberally. But of course the biggest force at work is Twitter’s
fundamentally different mechanics that sets it apart from Facebook. If Twitter
does not suffer many more events of Facebook creep like the discontinuation of
chronological timelines, I think it will remain bearable.</p>
<p>So I think with the current state of affairs, we’d be better off staying away
from most social networks or at least limiting our time on them and how much we
share our data. Social networks are better when you have a circle of
closely-knit connections and the network does not shove endlessly addictive
content down your throat. Your online existence is entirely in the form of
data, and companies who don’t treat your data with respect and care <a href="https://www.buzzfeednews.com/article/charliewarzel/why-after-2018s-privacy-scandals-does-facebook-deserve-our">don’t deserve it</a>. A massive social
network run by a single profit-seeking corporation is probably not a good idea,
because they have ample incentives to exploit you and zero incentives to
respect your privacy. A federated social network backed by open source software
and open protocols would solve some problems but at the moment options like
<a href="https://en.wikipedia.org/wiki/Mastodon_(software)">Mastodon</a>/<a href="https://en.wikipedia.org/wiki/ActivityPub">ActivityPub</a> have difficult <a href="https://medium.com/@seanbonner/taking-a-ride-on-mastodon-4fe0c6e60e04k">problems</a> of
their own. Whatever the future holds, I think it is wise to remain skeptical
and maintain our distance.</p>A little more than two years ago, in the summer of 2016, I deactivated my Facebook account. I had been planning to leave Facebook for some time by then, but for one reason or another I had hesitated to make the decision. I did not delete my account right away - just deactivated it - so that I could log back in if something came up and I needed it. But after that, I never felt a need to get back on Facebook. It felt kind of freeing, to tell the truth. After that, one story after another broke about Facebook’s data scandals, and they strengthened my resolve not to go back. Pretty soon, Facebook changed from being a significant part of my day-to-day thoughts to something stored in a drawer in a corner of my mind and forgotten about.Proving 1 > 02018-12-10T00:00:00+00:002018-12-10T00:00:00+00:00http://blog.neerajadhikari.com.np/math/proving-one-greater-than-zero<p>How do you know that 1 is greater than 0? What a silly question! By intuition,
you might say. Zero means nothing, and one means a unit quantity of something.
And surely something is more than nothing. Something even an infant would know.
But that’s intuition, or common sense. Intuition is a powerful thing, but it
won’t get us very far in mathematics. That’s why we have mathematical rigor -
where we use axiomatic systems which have a set of statements that we consider
true (axioms), and then use rigid, very mechanical symbol-manipulation rules to
generate other statements that are true.</p>
<p>I am currently studying Michael Spivak’s <em>Calculus</em>, both to brush up my
calculus knowledge and because it is known for taking a very rigorous approach,
meticulously proving theorems that other books often state without proof. The
first chapter is on the properties of numbers. While it doesn’t start right
from the <a href="https://en.wikipedia.org/wiki/Peano_axioms">bottom</a> (it leaves out
properties of equality, for example), it presents 12 properties from which
we can derive other interesting truths. If we assume just some basic properties
of addition, multiplication and the inequality symbols, and know nothing else
about numbers except what our properties and derived facts tell us, let’s see
what we need to conclude that <script type="math/tex">1 > 0</script>.</p>
<p>Here are the 12 properties that we will consider true:</p>
<h3 id="addition-properties"> Addition Properties</h3>
<ol>
<li>
<p>If <script type="math/tex">a</script>, <script type="math/tex">b</script> and <script type="math/tex">c</script> are any numbers,</p>
<script type="math/tex; mode=display">a + (b + c) = (a + b) + c</script>
<p>This property is called the <em>associativity of addition</em> and it provides us
a means of adding more than two numbers, by saying that the order in which
you perform the individual additions does not matter.</p>
</li>
<li>
<p>If <script type="math/tex">a</script> is any number,</p>
<script type="math/tex; mode=display">a + 0 = 0 + a = a</script>
<p>This property states the existence of the number 0 and indicates its
defining behavior. 0 is called the <em>additive identity</em> because it leaves
numbers unchanged when it is added to them. With this property we have our first
actual, concrete number. The previous property talked of ‘any numbers’ and
used letters to denote them but provided no examples.</p>
</li>
<li>
<p>For every number <script type="math/tex">a</script>, there is a number <script type="math/tex">-a</script> such that</p>
<script type="math/tex; mode=display">a + (-a) = (-a) + a = 0</script>
<p>Here we establish a relationship between any (and every) number, its
<em>additive inverse</em> that we denote by a minus sign before the number, and 0.
For convenience, we can write <script type="math/tex">a+(-b)</script> as <script type="math/tex">a-b</script>.</p>
</li>
<li>
<p>If <script type="math/tex">a</script> and <script type="math/tex">b</script> are any numbers, then</p>
<script type="math/tex; mode=display">a + b = b + a</script>
<p>Essentially, the result of addition is the same, whatever the order of
numbers around the addition symbol. This property is known as the
<em>commutativity of addition</em>.</p>
<h3 id="multiplication-properties">Multiplication Properties</h3>
</li>
<li>
<p>If <script type="math/tex">a</script>, <script type="math/tex">b</script> and <script type="math/tex">c</script> are any numbers,</p>
<script type="math/tex; mode=display">a \cdot (b \cdot c) = (a \cdot b) \cdot c</script>
<p>This is the same as property 1, but for a new operation we call
multiplication and denote by <script type="math/tex">\cdot</script>. This is called the <em>associativity
of multiplication</em>.</p>
</li>
<li>
<p>If <script type="math/tex">a</script> is any number,</p>
<script type="math/tex; mode=display">a \cdot 1 = 1 \cdot a = a \\
\text{Also,} \; 1 \neq 0</script>
<p>Like property 2 does for addition, this property defines a <em>multiplicative
identity</em>. Multiply any number by <script type="math/tex">1</script>, and you leave the number unchanged.
The property also mentions that <script type="math/tex">1 \neq 0</script>, to differentiate the new
concrete number <script type="math/tex">1</script> that we have introduced here from the <script type="math/tex">0</script> that we
introduced earlier. Without this disclaimer, <script type="math/tex">1</script> could have been just
another symbol for <script type="math/tex">0</script>. There is nothing in the previous properties to
prohibit that, because this is the property that defines <script type="math/tex">1</script> for the
first time.</p>
</li>
<li>
<p>For every number <script type="math/tex">a \neq 0</script>, there exists a number <script type="math/tex">a^{-1}</script> such that</p>
<script type="math/tex; mode=display">a \cdot a^{-1} = a^{-1} \cdot a = 1</script>
<p>Similar to property 3, but this one defines a <em>multiplicative inverse</em>.
A very important thing to note: multiplicative inverses are defined for all
numbers except <script type="math/tex">0</script>.</p>
</li>
<li>
<p>If <script type="math/tex">a</script> and <script type="math/tex">b</script> are any numbers,</p>
<script type="math/tex; mode=display">a \cdot b = b \cdot a</script>
<p>Like for addition, the order of the numbers you are multiplying does not
affect the result of multiplication. This is called the <em>commutativity of
multiplication</em>.</p>
<p>At the moment we know a lot about addition and multiplication, but there is
not much we can do. For example, we would be helpless if we wanted to prove
that <script type="math/tex">a \cdot 0 = 0</script> for any number <script type="math/tex">a</script>. That is because in the
properties we have seen, <script type="math/tex">0</script> only appears with addition and what we are
trying to prove involves multiplication. So we need something that relates
these two operations.</p>
<h3 id="a-property-of-both">A Property of Both</h3>
</li>
<li>
<p>If <script type="math/tex">a</script>, <script type="math/tex">b</script> and <script type="math/tex">c</script> are any numbers,</p>
<script type="math/tex; mode=display">a \cdot (b+c) = a \cdot b + a \cdot c</script>
<p>This is called the <em>distributive law</em>, and by tying addition and
multiplication together it enables us to prove what we want:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{eqnarray}
a \cdot 0 + a \cdot 0 &=& a \cdot (0 + 0) \nonumber \\
&=& a \cdot 0 \nonumber \\
\end{eqnarray} %]]></script>
<p>Adding <script type="math/tex">-(a\cdot0)</script> to both sides, we get <script type="math/tex">\mathbf{a \cdot 0 = 0}</script></p>
<p>While we are here, let’s prove two other facts we will need later. First,
<script type="math/tex">(-a) \cdot b = -(a \cdot b)</script>, which is simple:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{eqnarray}
(-a) \cdot b + a \cdot b &=& [(-a) + a] \cdot b \nonumber \\
&=& 0 \cdot b \nonumber \\
&=& 0 \nonumber \\
\end{eqnarray} %]]></script>
<p>Adding <script type="math/tex">-(a \cdot b)</script> to both sides, we get
<script type="math/tex">\mathbf{(-a) \cdot b = -(a\cdot b)}</script></p>
<p>Another fact we can prove is <script type="math/tex">(-a)\cdot(-b) = a\cdot b</script></p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{eqnarray}
(-a) \cdot (-b) + [- (a \cdot b)] &=& (-a) \cdot (-b) + (-a) \cdot b \nonumber \\
&=& (-a) \cdot [(-b) + b] \nonumber \\
&=& (-a) \cdot 0 \nonumber \\
&=& 0 \nonumber \\
\end{eqnarray} %]]></script>
<p>Adding <script type="math/tex">a \cdot b</script> to both sides, we get
<script type="math/tex">\mathbf{(-a) \cdot (-b) = (a\cdot b)}</script></p>
<p>We have proved a good set of facts at this point, but we still have no way
of doing anything with inequalities. Essentially, we know there are <em>at least</em>
two numbers, <script type="math/tex">0</script> and <script type="math/tex">1</script>, but have no notion of ‘greater than’ or
‘smaller than’. The next three properties will change this situation.</p>
<p>Instead of defining inequalities right away, it is more convenient to
define <script type="math/tex">P</script> as the set of all positive numbers, and state properties 10-12
in terms of <script type="math/tex">P</script>.</p>
<h3 id="inequality-properties">Inequality Properties</h3>
</li>
<li>
<p>If <script type="math/tex">a</script> is any number, then one and only one of the following is true:</p>
<p>i. <script type="math/tex">a = 0</script><br />
ii. <script type="math/tex">a</script> is in <script type="math/tex">P</script><br />
iii. <script type="math/tex">(-a)</script> is in <script type="math/tex">P</script></p>
<p>This is called the <em>Trichotomy Law</em>. It cleanly separates numbers into
three categories: <script type="math/tex">0</script>, numbers which are in <script type="math/tex">P</script> and numbers whose
additive inverses are in <script type="math/tex">P</script>.</p>
</li>
<li>
<p>If <script type="math/tex">a</script> and <script type="math/tex">b</script> are in <script type="math/tex">P</script>, then <script type="math/tex">a + b</script> is in <script type="math/tex">P</script>.</p>
<p>This is called the <em>closure of positive numbers under addition</em>.</p>
</li>
<li>
<p>If <script type="math/tex">a</script> and <script type="math/tex">b</script> are in <script type="math/tex">P</script>, then <script type="math/tex">a \cdot b</script> is in <script type="math/tex">P</script>.</p>
<p>This is called the <em>closure of positive numbers under
multiplication</em>.</p>
</li>
</ol>
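<p>As a quick sanity check on the three facts derived along the way — <script type="math/tex">a \cdot 0 = 0</script>, <script type="math/tex">(-a) \cdot b = -(a \cdot b)</script>, and <script type="math/tex">(-a)\cdot(-b) = a\cdot b</script> — here is a numeric spot-check over sample integers. This is my own illustration, not part of Spivak’s text, and of course a finite check is no substitute for the proofs above:</p>

```python
# Spot-check (not a proof!) of the three derived identities on sample integers.
samples = [-3, -1, 0, 2, 5]

for a in samples:
    assert a * 0 == 0                   # a . 0 = 0
    for b in samples:
        assert (-a) * b == -(a * b)     # (-a) . b = -(a . b)
        assert (-a) * (-b) == a * b     # (-a) . (-b) = a . b

print("all three identities hold on the samples")
```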
<p>So now we have our 12 properties. It is time now to define the symbols <script type="math/tex">\lt</script>,
<script type="math/tex">\gt</script>, <script type="math/tex">\le</script> and <script type="math/tex">\ge</script>.</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{eqnarray}
a \gt b & \; \text{if} \; & a - b \; \text{is in} \; P \\
a \lt b & \; \text{if} \; & b \gt a\\
a \ge b & \; \text{if} \; & a \gt b \; \text{or} \; a = b\\
a \le b & \; \text{if} \; & a \lt b \; \text{or} \; a = b\\
\end{eqnarray} %]]></script>
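<p>These definitions can be mirrored in a few lines of code. In this sketch (my illustration, not from the book) <script type="math/tex">P</script> is modelled by a membership test on the integers, and all four order relations reduce to that single test:</p>

```python
def in_P(x):
    # Toy model: over the integers, take P to be the usual positives.
    return x > 0

def gt(a, b):
    return in_P(a - b)      # a > b  iff  a - b is in P

def lt(a, b):
    return gt(b, a)         # a < b  iff  b > a

def ge(a, b):
    return gt(a, b) or a == b

def le(a, b):
    return lt(a, b) or a == b

# The claim we are after, 1 > 0, reduces to "1 - 0 is in P".
print(gt(1, 0))  # → True
```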
<p>Hold on, because we are almost there. We need to prove one crucial fact first:
that if <script type="math/tex">a \lt 0</script> and <script type="math/tex">b \lt 0</script>, then <script type="math/tex">a \cdot b \gt 0</script>. In other
words, the product of two negative numbers is positive.</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{eqnarray}
& & a \lt 0 \\
& \Rightarrow & 0 \gt a \\
& \Rightarrow & 0-a \; \text{is in} \; P\\
& \Rightarrow & -a \; \text{is in} \; P\\
& \text{Similarly,} & -b \; \text{is in} \; P\\
& \Rightarrow & (-a)\cdot(-b) \; \text{is in} \; P\\
& \Rightarrow & a\cdot b \; \text{is in} \; P\\
& \Rightarrow & a\cdot b - 0\; \text{is in} \; P\\
& \Rightarrow & \mathbf{a\cdot b \gt 0}
\end{eqnarray} %]]></script>
<p>We have proved that if <script type="math/tex">a \lt 0</script> and <script type="math/tex">b \lt 0</script>, then
<script type="math/tex">a \cdot b \gt 0</script>. But it is also true that <script type="math/tex">a \cdot b \gt 0</script> when
<script type="math/tex">a \gt 0</script> and <script type="math/tex">b \gt 0</script>, because in that case both <script type="math/tex">a</script> and <script type="math/tex">b</script> are
in <script type="math/tex">P</script> and thus <script type="math/tex">a\cdot b</script> is in <script type="math/tex">P</script> by Property 12. To say them both in
the same sentence, <script type="math/tex">a \cdot b \gt 0</script> when <script type="math/tex">a \lt 0</script> and <script type="math/tex">b \lt 0</script>
or <script type="math/tex">a \gt 0</script> and <script type="math/tex">b \gt 0</script>. In the special case where <script type="math/tex">a=b</script>,
<script type="math/tex">a^2\gt0</script> when <script type="math/tex">a \gt 0</script> or <script type="math/tex">a \lt 0</script>. By the trichotomy law, that is the
same as <script type="math/tex">a \neq 0</script>. And because we have defined <script type="math/tex">1</script> as being not equal to
<script type="math/tex">0</script>, and because <script type="math/tex">1^2 = 1\cdot1 = 1</script>, we can finally conclude what we
needed to:</p>
<script type="math/tex; mode=display">\mathbf{1 \gt 0}</script>
<h3 id="ps">P.S.</h3>
<p>It took a lot of properties to prove that <script type="math/tex">1 \gt 0</script>, but after proving just
two more facts, we can proceed to prove inequality relations for all integers.</p>
<p>If <script type="math/tex">a\lt b</script>, so <script type="math/tex">b-a</script> is in <script type="math/tex">P</script>, then
<script type="math/tex">(b-a)+(c-c) = (b+c) - (a+c)</script> is in <script type="math/tex">P</script>. Thus, <script type="math/tex">a+c \lt b+c</script>.</p>
<p>If <script type="math/tex">a\lt b</script> and <script type="math/tex">b \lt c</script>, then <script type="math/tex">b-a</script> is in <script type="math/tex">P</script> and <script type="math/tex">c-b</script> is in
<script type="math/tex">P</script>. By Property 11, <script type="math/tex">(c-b)+(b-a) = c-a</script> is in <script type="math/tex">P</script>. Thus, <script type="math/tex">a \lt c</script>.</p>
<p>Because we have <script type="math/tex">0 \lt 1</script>, by the former of these two facts,
<script type="math/tex">0 + 1 \lt 1 + 1</script>, or <script type="math/tex">1 \lt 1 + 1</script>. Instead of leaving <script type="math/tex">1 + 1</script> as it
is, we can use the well-known symbol for it, <script type="math/tex">2</script>, so <script type="math/tex">\mathbf{ 1 \lt 2}</script>.
Similarly, <script type="math/tex">1 + 1 \lt 2 + 1</script>, i.e. <script type="math/tex">\mathbf{2 \lt 3}</script>, and so on. And we can
also compare non-consecutive numbers: because <script type="math/tex">0 \lt 1</script> and
<script type="math/tex">1 \lt 2</script>, <script type="math/tex">\mathbf{0 \lt 2}</script>. Similarly, we can derive inequalities for
the negative numbers, and <em>voilà!</em>, an order is enforced on all integers.</p>
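<p>The two P.S. facts — adding the same number to both sides preserves an inequality, and inequalities are transitive — can be animated in a short sketch. This is my own toy model, not part of the post’s axioms: it simply takes <script type="math/tex">P</script> to be the usual positive integers and derives the chain <script type="math/tex">0 \lt 1 \lt 2 \lt \dots</script> mechanically:</p>

```python
def in_P(x):
    return x > 0            # toy model: P is the usual positive integers

def lt(a, b):
    return in_P(b - a)      # a < b  iff  b - a is in P

# Start from 0 < 1 and repeatedly add 1 to both sides (the first P.S. fact).
chain = []
a, b = 0, 1
for _ in range(4):
    assert lt(a, b)
    chain.append((a, b))
    a, b = a + 1, b + 1

# Transitivity (the second P.S. fact) then compares non-consecutive numbers.
assert lt(0, 2)
print(chain)  # → [(0, 1), (1, 2), (2, 3), (3, 4)]
```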