sachachua: 2016-06-27 Emacs News

Links from reddit.com/r/emacs, /r/orgmode, Hacker News, planet.emacsen.org, Youtube, the changes to the Emacs NEWS file, and emacs-devel.

Past Emacs News round-ups

The post 2016-06-27 Emacs News appeared first on sacha chua :: living an awesome life.

2016-06-27 Emacs News (Post Sacha Chua), June 27, 2016 03:15 PM

Michael Fogleman: 2014: A Year of Emacs

My Emacs "birthday" is January 21, 2014. That's right-- I just started using Emacs this year! While it's not quite my one-year birthday, I thought I'd take advantage of the annual review blogging tradition to do a retrospective on my first year in Emacs. Note: much of the following chronology takes place during an extended romp through Asia.

I installed Arch Linux in the Fall of 2013. When using Linux, you often need to edit configuration files. The friend who helped me with the installation set me up with Vim. I learned the basics--how to insert, save, and quit--and not much else.

Arch on a bench

In January, I decided it might be fun to give Emacs a try. Here is a picture of Marina Bay in Singapore, taken just before or while I did the Emacs tutorial:

Marina Bay in Singapore, just before or during my Emacs birth

As a life-long user of keyboard shortcuts, I appreciated Emacs' notation for key bindings. The tutorial begins with basic cursor controls; easy enough, but nothing exciting. I could see the line, sentence, and buffer equivalents being useful. But my jaw dropped when it taught me C-t, transpose-chars. I knew that mastering just that command would probably save me a lot of pain and typos, and that was probably the least of what Emacs could do.

While the Emacs tutorial probably isn't meant to take more than an hour or two on the first try, it took me probably twice as long. This was because I made Anki cards. Some people say that Emacs' learning curve is steep. For myself, I've found Emacs' learning curve to be gentle, but (pleasantly) endless. Then again, it was probably making Anki cards that made the learning curve less steep for me.

A variation of this card was probably one of my first Emacs cards:

Anki card: transpose-chars 1

Cloze deletion cards like this one are very convenient for this kind of thing. One reason is that it's easy to add new, related information later:

Anki cards transpose-chars 2

In February, the battery on my little Arch laptop died. I spent most of the month without a laptop, waiting for a new battery to arrive by mail. I borrowed my friend and travel partner's machine to review my Anki cards. Still, I did manage to have a little time to develop my Emacs-fu.

In Hanoi, the hotel room we stayed in had a (virus-ridden) Windows machine. What else could I do but install Emacs? At this time, my Emacs configuration was not that complex, so it wasn't worth adapting. As it turned out, that time with a bare Emacs proved especially helpful for mastering some of the key Emacs concepts that I hadn't quite made use of yet; in particular, I got the hang of using multiple buffers in one session. By the end of February, my new battery had arrived, and I was back in the Emacs saddle.

In March, I converted my Emacs configuration to an Org-literate format, and started tracking it with Git. This was really the first project that I used Git for, and it gave me an opportunity to learn Git.

The other highlight of March was starting to learn a little Elisp. I found Robert Chassell's "An Introduction to Programming in Emacs Lisp" especially helpful. Chassell's overview and Org-mode's fantastic documentation helped me to write my first significant piece of Elisp (some org-capture templates).

April started off with a bang when my Emacs configuration was featured in a major motion picture, Tron Legacy. But that wasn't nearly as exciting as making my first major mode, tid-mode.

In late June, Artur Malabarba launched his Emacs blog, Endless Parentheses. One of his early posts was about hungry-delete. I was excited about it, but found that it did not work with transient-mark-mode. Artur encouraged me to write a fix. I was very excited when Nathaniel Flath, the maintainer, agreed to merge my patches into hungry-delete. At about the same time, Artur posted an adapted version of my narrow-or-widen-dwim function to Endless Parentheses. Sacha Chua, Nathaniel Flath, and some other Emacs Lisp hackers also offered suggestions.

At the end of July, I started learning Clojure, and added an initial CIDER configuration. While reading the Clojure Cookbook, I decided to hack out a function that turned the pages. Ryan Neufeld, one of the co-authors, merged my pull request, and a while later, I saw my function mentioned on Planet Clojure.

In October, I purchased a Mac for a Clojure contracting job. (Knowing Emacs helped me get the job!) Adding the :ensure keyword to my use-package declarations made the Emacs configuration portion of my OS X set-up process near-instant and pain-free. It was a really cool feeling to be using the same Emacs configuration on two different machines. By the end of the month, both were running on the newly-released Emacs 24.4.

Once Emacs 24.4 was out, Artur posted an Emacs 25 wishlist to Endless Parentheses. I responded with this tweet:

This tweet indirectly led to the happy occasion of the Emacs Hangouts.

That just about brings us to this post. In retrospect, I can't believe it's been less than a year, or that I've learned so much. Thanks to everyone who has helped me so far on the endless path of mastering Emacs, especially Ben, Eric, Sacha Chua, Artur Malabarba, and Bozhidar Batsov.

Joseph Campbell was a Lisp guru

I want to close this post out with Sacha's tweet about 2015:

Like Sacha, I want to learn how to write and run tests in Emacs. I want to learn more about org-mode's features and possible workflows. More broadly, I'd like to make more posts about Emacs, and find other ways to contribute to the Emacs community. I'd like to learn more about Emacs core development. In terms of contrib, I'd like to help accelerate the release of Magit's "next" branch. Above all, I'm sure I'll keep tweaking my configuration.

2014: A Year of Emacs (Post Michael Fogleman), June 26, 2016 09:01 AM

Grant Rettke: Making the Repl sing electric

Easily compose music and visual art with Emacs via ReneFroger.

Making the Repl sing electric (Post Grant), June 25, 2016 12:56 PM

Irreal: Vi and Emacs Without the Religion

Chris Patti over at Blind Not Dumb has gathered his courage and written a piece on vi versus Emacs. He approaches the subject, as the title suggests, without the usual religious fervor.

His take, which is hard to argue with, is that the best editor depends on what you are trying to do. If you want to edit text as quickly and efficiently as possible then vi/Vim is probably the editor for you. Be aware, though, that Vim is an editor not an IDE. Patti says that efforts to bolt on IDE-like features rarely end well. Either the extension doesn't work well or it destabilizes Vim.

Emacs, on the other hand, is more of a programming environment that is highly optimized for dealing with text. That means that you can not only edit but do other—usually, but not always—text-oriented tasks in the same environment. That gives rise to the familiar—to Emacsers—tendency to move everything possible inside Emacs.

The other advantage of Emacs is that you can customize it to operate in almost any conceivable way. Vim, of course, is also customizable but not nearly to the same extent.

Patti's post is a balanced recounting of the benefits of each editor and may help n00bs trying to decide which one to use to pick the editor best suited for them. I'd bet that almost every Emacs/vi user knows and has used both. Many people start with one and switch to the other for some reason. From my point of view, I love using Emacs because I have adjusted it to enable a nearly frictionless workflow. Still, there are times when only vi/Vim is available so I'm glad to know both.

Vi and Emacs Without the Religion (Post jcs), June 24, 2016 12:07 PM

Ben Simon: Cutting the electronic cord: Setting up a fully paper TODO list tracking strategy

A few months back I started a new TODO list strategy. Rather than having a master task list in the Cloud and creating a daily TODO list in a paper notebook, I maintained both an electronic and paper master task list. The electronic version was tracked in git + emacs + org-mode and the paper version was on index cards.

While git + emacs + org-mode was certainly functional, I never had cause to do anything particularly sexy with the setup. In fact, I was hoping this experience would convert me from a fan of subversion to a fan of git, but alas, it only reinforced my appreciation for the simplicity of subversion.

The index cards, on the other hand, were a joy to use. I love that each project is represented by a single card, and that spreading out the cards gives me an overview of all possible tasks to work on:

My daily ritual has become this: brew a cup of tea, spread out the cards, and review where I'm at. I then choose a sequence of tasks to tackle for the day, and stack the cards accordingly:

As I complete tasks, I cross out the line item in question. Switching projects means moving the card on top to the back of the deck, and giving my full attention to the newly visible card.

The 10 lines of a 3x5 index card are perfect for keeping tabs on active tasks on a project. If all goes well, project cards become a crossed-out mess. No biggie, these cards get recycled (for example: as drawing canvases), and I create a fresh card for the project. The turnover in cards helps keep both physical and mental clutter to a minimum.

I have three recurring activities that I like to fit into my day: scrubbing my work e-mail, scrubbing my personal e-mail and blogging. I wrapped these cards in packing tape, to make them more durable. As a bonus, they serve as tiny whiteboards. These special cards get integrated into the daily stack like any other project.

A few weeks back I splurged on a set of colored pens, and the result is that I can now color code information on the cards with ease.

There's no doubt that part of what I enjoy about this system is that the physical actions on the card reinforce my mental goals. For example, when I sequence the cards for the day, put them in a stack, and attach a mini-binder clip, I'm reinforcing the change-over from thinking big picture to thinking only about a specific task.

So the setup works. Of late, however, the drag of maintaining tasks both electronically and in paper form was getting to me. Yes, updating a task in both places takes just a few seconds, but still, all those seconds add up. So it was time to cut the cord and either go all electronic or all paper. Given the benefits of the paper strategy, I decided to go that route.

Before switching strictly to paper, however, I needed to account for the two main benefits that the electronic system was providing. These include: the always-available always-backed-up nature of storing a text file in git, and the quick linking capabilities offered by emacs + org-mode.

I have a pretty strict rule about my task list: never depend on my memory. Ever. If I were to get bonked on the head and suffer from short-term amnesia, I should be able to look at my task list and know exactly what I should work on next. So yeah, I take the integrity of my task list very seriously. Depending on a set of index cards which could be lost, forgotten in a coffee shop, run through the washing machine or destroyed in a freak tea spilling, is a bad idea. In short, I needed a backup strategy.

Turns out, this was an easy conundrum to solve. Every morning I spread out the cards to see what I should work on that day. The solution: I snap a photo of these spread out cards. Problem solved. The photo is backed up in the cloud and accessible everywhere. Yes, it means I have a daily backup and not the every-single-change backup that git provides, but I can live with that. As a bonus, it forces me to not get lazy and skip the planning-overview step of my process. Problem #1, solved.

The second challenge has to do with linking tasks to more information. As I said above, I don't like to depend on my memory. Another manifestation of this principle is that when I create a TODO item I like to link it back to a detailed source of information. Consider this fake task list:

Project Xin-Gap
...
* Fix user login issue
....

When I documented this as a task, I knew exactly what "login issue" I was referring to. Two weeks later (or one big bonk on the head), I may have no clue. org-mode makes it very easy to link items in the outline to a URL. In the above case, I'd link the text "Fix user login issue" to either a bug report URL or to the URL of the e-mail message where the issue was reported. These links allow my TODO list to remain a sort of tip of the iceberg: the details, like the majority of the iceberg, are hidden from view. But they're there.

So how do I replicate these links in an index card environment? This one took a little longer to figure out. Possible ideas included: NFC stickers and QR Codes. Ultimately, I realized what I needed was to leverage a URL shortener.

For example, if I wanted to link to a bug report over at: http://tracker.xin-gap.com/issue/2993, I could drop that URL into bit.ly and get out something like http://bit.ly/28UgMwR. I could then note 28UgMwR on the index card. To access the bug report, I reverse the process: use the code to form a bit.ly link, which in turn will take me to the bug report. The problem is, manually running these steps through bit.ly was too time-consuming to be practical.

After experimenting a bit with YOURLS, I finally settled on an even simpler approach. I already have a URL shortener set up for my Google Apps account. It was a Google Labs project back in the day, and somewhat shockingly, it still runs without issue. I access it via: u.ideas2executables.com. I found that if I visit the URL:

http://u.ideas2executables.com/admin.py?hash=True&main=True&action=EDIT&
 url=http%3A%2F%2Ftracker.xin-gap.com%2Fissue%2F2993&
 path=Z23

I'm taken to this page:

I can then write Z23 on my index card and hit Add Short Link on the above screen, and I'm all set. Google's URL shortener even comes with a Firefox protocol handler, which means that I can type goto:z23 and the browser will expand the URL and visit the correct page.

To streamline this process I created a bookmarklet that does almost all of this for me:

(function() {
  function c() {
    var dict = ['A','B','C','D','E','F','H','J','K','L','M','N',
                'P','Q','R','S','T','U','W','X','Y','Z',
                2,3,4,5,7,8,9];
    return dict[Math.floor(Math.random() * dict.length)];
  }
  window.open('http://u.ideas2executables.com/admin.py?hash=True&main=True&action=EDIT&' +
              'url='+encodeURIComponent(location.href) + '&' +
              'path=' + c() + c() + c(), '_blank');
}())

With one click on my toolbar, I get a browser tab opened to the add-short-link page where there's a random 3 digit code (minus any ambiguous characters like l vs 1 or 0 vs O). I note the 3 digit code on paper, click Add Short Link and close the tab. When I want to visit a task's linked item, I just enter the URL goto:ZZZ where ZZZ is the 3 digit code that I've noted.
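The moving parts here are small enough to mimic end to end. Below is a toy Python sketch of the same idea: a random 3-character code drawn from the bookmarklet's ambiguity-free alphabet, mapped to a URL. The function names and in-memory storage are invented for illustration; the real service keeps the mapping server-side.

```python
import random

# Same character set as the bookmarklet: no l/1, 0/O, or other
# pairs that could be misread when copied onto paper.
ALPHABET = "ABCDEFHJKLMNPQRSTUWXYZ2345789"

links = {}  # code -> URL (stand-in for the shortener's database)

def add_short_link(url):
    """Generate a random 3-character code and record the mapping."""
    code = "".join(random.choice(ALPHABET) for _ in range(3))
    links[code] = url
    return code

def goto(code):
    """Resolve a code back to its URL, like the goto: protocol handler."""
    return links.get(code)

code = add_short_link("http://tracker.xin-gap.com/issue/2993")
assert goto(code) == "http://tracker.xin-gap.com/issue/2993"
```

With 29 characters and 3 positions there are 24,389 possible codes, which is plenty for a task list, though a real service would also have to handle the occasional collision.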

Of course, if you want to pursue this, you'll need to set up your own link shortener. And for the life of me, I can't find any indication as to how you'd set up the Google Labs URL shortener I'm making use of. But you'll figure it out, I'm sure. The bottom line is, you can combine a URL shortener and a bookmarklet to make paper linking relatively painless.

With my two challenges solved, I went ahead and cut the electronic cord. It feels strange to depend solely on a stack of cards that I could trivially lose. But my daily photo backup gives me confidence, and the daily handling of cards reminds me that there's real power in working with physical materials.

Besides, it's only a matter of time before I refine my system again. As long as I have TODO items, I'll be on the lookout for a better way of organizing them.

Cutting the electronic cord: Setting up a fully paper TODO list tracking strategy (Post Ben Simon), June 23, 2016 05:50 PM

Irreal: Literate Programming with Org Mode

Frédérick Giasson, whom I've mentioned before, has a nice post on setting up Org mode for literate programming. Giasson's post is mostly concerned with using literate programming to write Clojure but almost all of his setup is usable for other languages.

It's a testament to the power of Org mode that very little has to be changed from the default settings to have a first class environment. Most of the significant changes that Giasson made involved setting up the environment for Clojure.

One non-trivial change he made for Org was to tangle the code automatically when the file is saved. That keeps the code file up-to-date with the Org source file. To make sure his buffers stay up to date, he calls global-auto-revert-mode so that when the code file is updated, any open buffers for the file are reloaded.

If you're interested in trying out literate programming in an easy way, give Giasson's post a read to see how little effort is required.

Literate Programming with Org Mode (Post jcs), June 23, 2016 12:25 PM

Pragmatic Emacs: Macro Counters

I’ve posted about using keyboard macros to record and play back repetitive tasks. Macros include a counter which lets you insert numerical values that increment each time the macro is called.

For example, go to a new line and start a macro with C-x ( and then hit C-a to move to the start of the line, C-x C-k C-i to insert the macro counter (initially zero) and then RET to go to a new line and C-x ) to stop the recording. Run the macro a few times with C-x e and then just e to repeat the macro and you’ll get something like this:

0
1
2
3
4

The counter starts at zero every time you define a new macro. To set it to another value, use C-x C-k C-c before defining or invoking a macro.

Macro Counters (Post Ben Maughan), June 21, 2016 08:23 PM

Raimon Grau: TIL: Toggle tracing defuns with slime


A nice and quick way to trace/untrace defuns from slime:

(define-key slime-mode-map (kbd "C-c t") 'slime-toggle-trace-fdefinition)
TIL: Toggle tracing defuns with slime (Post Raimon Grau), June 21, 2016 11:06 AM

sachachua: 2016-06-20 Emacs News

Links from reddit.com/r/emacs, /r/orgmode, Hacker News, planet.emacsen.org, Youtube, the changes to the Emacs NEWS file, and emacs-devel.

Past Emacs News round-ups

The post 2016-06-20 Emacs News appeared first on sacha chua :: living an awesome life.

2016-06-20 Emacs News (Post Sacha Chua), June 20, 2016 09:44 PM

Marcin Borkowski: Easy Javascript logging

I’ve been doing some JavaScript coding recently. Before you run off screaming, let me tell you that JavaScript is not that bad. Yes, it has C-like, unlispy syntax, but at its heart it is quite a nice language. Today, though, I didn’t want to write about JS in general, but about one small detail. While debugging JavaScript code, it is often useful to sprinkle console.log statements in your code. Also, if you want to check the value of some more complicated variable, you need to JSON.stringify it first. Of course, writing repetitive pieces of code is not what an Emacs user would like to do, so I hacked this:
Easy Javascript logging, June 20, 2016 07:52 PM

Endless Parentheses: Restarting the compilation buffer in comint-mode

After last week's post, Clément Pit-Claudel informed us of an alternative method for providing input to compilations. I have no idea how I’d never learned about that before, but I figure that other people might be in the same situation so it’s worth a post. Have a look at the Update at the end of the post.

I’ve also updated an older post accordingly: Better compile command.

Comment on this.

Restarting the compilation buffer in comint-mode, June 17, 2016 12:00 AM

Chris Wellons: Elfeed, cURL, and You

This morning I pushed out an important update to Elfeed, my web feed reader for Emacs. The update should be available in MELPA by the time you read this. Elfeed now has support for fetching feeds with cURL through an inferior curl process. You’ll need the program in your PATH or configured through elfeed-curl-program-name.

I’ve been using it for a couple of days now, but, while I work out the remaining kinks, it’s disabled by default. So in addition to having cURL installed, you’ll need to set elfeed-use-curl to non-nil. Sometime soon it will be enabled by default whenever cURL is available. The original url-retrieve fetcher will remain in place for the time being. However, cURL may become a requirement someday.

Fetching with a curl inferior process has some huge advantages.

It’s much faster

The most obvious change is that you should experience a huge speedup on updates and better responsiveness during updates after the first cURL run. There are two important reasons:

Asynchronous DNS and TCP: Emacs 24 and earlier performs DNS queries synchronously even for asynchronous network processes. This is being fixed on some platforms (including Linux) in Emacs 25, but now we don’t have to wait.

On Windows it’s even worse: the TCP connection is also established synchronously. This is especially bad when fetching relatively small items such as feeds, because the DNS look-up and TCP handshake dominate the overall fetch time. It essentially makes the whole process synchronous.

Conditional GET: HTTP has two mechanisms to avoid transmitting information that a client has previously fetched. One is the Last-Modified header delivered by the server with the content. When querying again later, the client echoes the date back like a token in the If-Modified-Since header.

The second is the “entity tag,” an arbitrary server-selected token associated with each version of the content. The server delivers it along with the content in the ETag header, and the client hands it back later in the If-None-Match header, sort of like a cookie.

This is highly valuable for feeds because, unless the feed is particularly active, most of the time the feed hasn’t been updated since the last query. This avoids sending anything other than a handful of headers each way. In Elfeed’s case, it means it doesn’t have to parse the same XML over and over again.

Both of these being outside of cURL’s scope, Elfeed has to manage conditional GET itself. I had no control over the HTTP headers until now, so I couldn’t take advantage of it. Emacs’ url-retrieve function allows for sending custom headers through dynamically binding url-request-extra-headers, but this isn’t available when calling url-queue-retrieve since the request itself is created asynchronously.

Both the ETag and Last-Modified values are stored in the database and persist across sessions. This is the reason the full speedup isn’t realized until the second fetch. The initial cURL fetch doesn’t have these values.
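The conditional-GET exchange described above can be sketched in a few lines. This is a toy simulation in Python, not Elfeed's actual code; the serve function and its shape are invented purely to illustrate the protocol.

```python
def serve(feed_xml, etag, request_headers):
    """Pretend feed server: if the client echoes back the current
    ETag in If-None-Match, answer 304 with no body; otherwise send
    the full content (and the client would cache the new ETag)."""
    if request_headers.get("If-None-Match") == etag:
        return 304, None        # nothing changed; body is skipped
    return 200, feed_xml        # full response on first fetch

# First fetch: no saved token, so the whole feed comes down.
status, body = serve("<rss>...</rss>", 'W/"abc123"', {})
assert (status, body) == (200, "<rss>...</rss>")

# Subsequent fetch: echo the stored ETag; only headers go each way.
status, body = serve("<rss>...</rss>", 'W/"abc123"',
                     {"If-None-Match": 'W/"abc123"'})
assert (status, body) == (304, None)
```

The Last-Modified/If-Modified-Since pair works the same way, with a date standing in for the opaque token.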

Fewer bugs

As mentioned previously, Emacs has a built-in URL retrieval library called url. The central function is url-retrieve which asynchronously fetches the content at an arbitrary URL (usually HTTP) and delivers the buffer and status to a callback when it’s ready. There’s also a queue front-end for it, url-queue-retrieve which limits the number of parallel connections. Elfeed hands this function a pile of feed URLs all at once and it fetches them N at a time.

Unfortunately both these functions are incredibly buggy. It’s been a thorn in my side for years.

Here’s what the interface looks like for both:

(url-retrieve URL CALLBACK &optional CBARGS SILENT INHIBIT-COOKIES)

It takes a URL and a callback. Seeing this, the sane, unsurprising expectation is that the callback will be invoked exactly once for each time url-retrieve was called. In any case where the request fails, it should report it through the callback. This is not the case. The callback may be invoked any number of times, including zero.

In this example, suppose you have a webserver on port 8081 that will return an HTTP 404 at the given URL. Below, I fire off 10 asynchronous requests in a row.

(defvar results ())
(dotimes (i 10)
  (url-retrieve "http://127.0.0.1:8081/404"
                (lambda (status) (push (cons i status) results))))

What would you guess is the length of results? It’s initially 0 before any requests complete and over time (a very short time) I would expect this to top out at 10. On Emacs 24, here’s the real answer:

(length results)
;; => 46

The same error is reported multiple times to the callback. At least the pattern is obvious.

(cl-count 0 results :key #'car)
;; => 9
(cl-count 1 results :key #'car)
;; => 8
(cl-count 2 results :key #'car)
;; => 7

(cl-count 9 results :key #'car)
;; => 1

Here’s another one, this time to the non-existent foo.example. The DNS query should never resolve.

(setf results ())
(dotimes (i 10)
  (url-retrieve "http://foo.example/"
                (lambda (status) (push (cons i status) results))))

What’s the length of results? This time it’s zero. Remember how DNS is synchronous? Because of this, DNS failures are reported synchronously as a signaled error. This gets a lot worse with url-queue-retrieve. Since the request is put off until later, DNS doesn’t fail until later, and you get neither a callback nor an error signal. This also puts the queue in a bad state and necessitated elfeed-unjam for manually clearing it. This one should get fixed in Emacs 25 when DNS is asynchronous.

This last one assumes you don’t have anything listening on port 57432 (pulled out of nowhere) so that the connection fails.

(setf results ())
(dotimes (i 10)
  (url-retrieve "http://127.0.0.1:57432/"
                (lambda (status) (push (cons i status) results))))

On Linux, we finally get the sane result of 10. However, on Windows, it’s zero. The synchronous TCP connection will fail, signaling an error just like DNS failures. Not only is it broken, it’s broken in different ways on different platforms.

There are many more cases of callback weirdness which depend on the connection and HTTP session being in various states when things go awry. These were just the easiest to demonstrate. By using cURL, I get to bypass this mess.

No more GnuTLS issues

At compile time, Emacs can optionally be linked against GnuTLS, giving it robust TLS support so long as the shared library is available. url-retrieve uses this for fetching HTTPS content. Unfortunately, this library is noisy and will occasionally echo non-informational messages in the minibuffer and in *Messages* that cannot be suppressed.

When not linked against GnuTLS, Emacs will instead run the GnuTLS command line program as an inferior process, just like Elfeed now does with cURL. Unfortunately this interface is very slow and frequently fails, basically preventing Elfeed from fetching HTTPS feeds. I suspect it’s in part due to an improper coding-system-for-read.

cURL handles all the TLS negotiation itself, so both these problems disappear. The compile-time configuration doesn’t matter.

Windows is now supported

Emacs’ Windows networking code is so unstable, even in Emacs 25, that I couldn’t make any practical use of Elfeed on that platform. Even the Cygwin emacs-w32 version couldn’t cut it. It hard crashes Emacs every time I’ve tried to fetch feeds. Fortunately the inferior process code is a whole lot more stable, meaning fetching with cURL works great. As of today, you can now use Elfeed on Windows. The biggest obstacle is getting cURL installed and configured.

Interface changes

With cURL, obviously the values of url-queue-timeout and url-queue-parallel-processes no longer have any meaning to Elfeed. If you set these for yourself, you should instead call the functions elfeed-set-timeout and elfeed-set-max-connections, which will do the appropriate thing depending on the value of elfeed-use-curl. Each also comes with a getter so you can query the current value.

The deprecated elfeed-max-connections has been removed.

Feed objects now have meta tags :etag, :last-modified, and :canonical-url. The latter can identify feeds that have been moved, though it needs a real UI.

See any bugs?

If you use Elfeed, grab the current update and give the cURL fetcher a shot. Please open a ticket if you find problems. Be sure to report your Emacs version, operating system, and cURL version.

As of this writing there’s just one thing missing compared to url-queue: connection reuse. cURL supports it, so I just need to code it up.

Elfeed, cURL, and You, June 16, 2016 06:22 PM

Ben Simon: Got Data? Adventures in Virtual Crystal Ball Creation

There are two ways to look at this recent Programming Praxis exercise: implementing a beginner-level statistics function or creating a magical crystal ball that can predict past, present and future! I chose to approach this problem with the mindset of the latter. Let's make some magic!

The algorithm we're tackling is linear regression. I managed to skip statistics in college (what a shame!), so I don't recall ever being formally taught this technique. Very roughly, if you have the right set of data, you can fit a line through it. You can then use this line to predict values not in the data set.

The exercise gave us this tiny, manufactured, data set:

x    y
60   3.1
61   3.6
62   3.8
63   4.0
65   4.1

With linear regression you can answer questions like: what will the associated value for, say, 64, 66 or 1024 be? Here's my implementation in action:

A few words about the screenshot above. You'll notice that I'm converting my data from a simple list to a generator. A generator in this case is a function that will return a single element in the data set, and returns '() when all the data has been exhausted. I chose to use a generator over a simple list because I wanted to allow this solution to scale to large data sets.

Below you'll see a data set that's stored in a file and leverages a file-based generator to access its contents. So far, I haven't thrown a large data set at this program, but I believe it should scale without issue.
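The generator contract described above (a zero-argument function that hands back one element per call, then a sentinel once the data is exhausted) translates directly to other languages. Here is a minimal Python analogue, for illustration only, with None standing in for Scheme's '():

```python
def list_generator(items):
    """Wrap a list in a zero-argument function that returns one
    element per call, then None when the list is exhausted."""
    remaining = list(items)  # private copy, consumed call by call
    def gen():
        if not remaining:
            return None      # the exhaustion sentinel
        return remaining.pop(0)
    return gen

g = list_generator([1, 2, 3])
values = [g(), g(), g(), g()]   # the fourth call hits the sentinel
```

A file-backed version would have the same shape, reading one record per call instead of popping from a list, which is what lets the same consumer scale to data sets too large to hold in memory.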

The call to make-crystal-ball performs the linear regression and returns a function that, when provided x, returns a prediction for y. What? I'm trying to have a bit of fun here.

Looking around on the web I found this example that uses a small, but real data set. In this case, it compares High School and College GPA. Using linear regression we're able to predict how a High School student with a 2.7, 3.0 or 3.5 GPA is going to do in college. Here's the code:

;;  http://onlinestatbook.com/2/regression/intro.html
;;  http://onlinestatbook.com/2/case_studies/sat.html
(define (gpa-sat-test)
  (define data (file->generator "gpa-sat-data.scm"
                                (lambda (high-gpa math-sat verb-sat comp-gpa univ-gpa)
                                  (list high-gpa univ-gpa))))
  (let ((ball (make-crystal-ball data)))
    (show "2.7 => " (ball 2.7))
    (show "3.0 => " (ball 3))
    (show "3.5 => " (ball 3.5))))

And the answer is: 2.91, 3.12 and 3.45 respectively. So yeah, that's good news if you were dragging in High School: your GPA should climb a bit. But High School overachievers should beware, your GPA is most likely to dip. D'oh.

Below is my implementation of this solution. You can also find it on github. As usual I find myself preaching the benefits of Scheme. The code below was written on my Galaxy Note 5 using Termux, emacs and Tinyscheme. With relative ease I was able to implement a generator framework that works for both lists and data files. I'm also able to leverage the Scheme reader so that the data file format is trivial to operate on. Finally, I wrote a generic sigma function that walks through my data set once, but performs all the various summations I'll need to calculate the necessary values. In other words, I feel like I've got an elegant solution using little more than lists, lambda functions and sexprs. It's beautiful and should be memory efficient.

Here's the code:

;; https://programmingpraxis.com/2016/06/10/linear-regression/

(define (show . args)
  (for-each (lambda (arg)
              (display arg)
              (display " "))
            args)
  (newline))

(define (as-list x)
  (if (list? x) x (list x)))

(define (g list index)
  (list-ref list index))

(define (make-list n seed)
  (if (= n 0) '()
      (cons seed (make-list (- n 1) seed))))

(define (list->generator lst)
  (let ((remaining lst))
    (lambda ()
      (cond ((null? remaining) '())
            (else
             (let ((x (car remaining)))
               (set! remaining (cdr remaining))
               x))))))

(define (file->generator path scrubber)
  (let ((port (open-input-file path)))
    (lambda ()
      (let ((next (read port)))
        (if (eof-object? next) '() (apply scrubber next))))))

(define (sigma generator . fns)
  (define (update fns sums data)
    (let loop ((fns fns)
               (sums sums)
               (results '()))
      (cond ((null? fns) (reverse results))
            (else
             (let ((fn (car fns))
                   (a  (car sums)))
               (loop (cdr fns)
                     (cdr sums)
                     (cons (+ a (apply fn (as-list data)))
                           results)))))))
  (let loop ((data (generator))
             (sums (make-list (length fns) 0)))
    (if (null? data) sums
        (loop (generator)
              (update fns sums data)))))

;; Magic happens here:
;; m = (n × Σxy − Σx × Σy) ÷ (n × Σx² − (Σx)²)
;; b = (Σy − m × Σx) ÷ n
(define (linear-regression data)
  (let ((sums (sigma data
                     (lambda (x y) (* x y))
                     (lambda (x y) x)
                     (lambda (x y) y)
                     (lambda (x y) (* x x))
                     (lambda (x y) 1))))
    (let* ((Sxy (g sums 0))
           (Sx  (g sums 1))
           (Sy  (g sums 2))
           (Sxx (g sums 3))
           (n   (g sums 4)))
      (let* ((m (/ (- (* n Sxy) (* Sx Sy))
                   (- (* n Sxx) (* Sx Sx))))
             (b (/ (- Sy (* m Sx)) n)))
        (cons m b)))))

(define (make-crystal-ball data)
  (let* ((lr (linear-regression data))
         (m  (car lr))
         (b  (cdr lr)))
    (lambda (x)
      (+ (* m x) b))))

;; Playtime
(define (test)
  (define data (list->generator '((60   3.1)
                                  (61   3.6)
                                  (62   3.8)
                                  (63   4.0)
                                  (65   4.1))))
  (let ((ball (make-crystal-ball data)))
    (show (ball 64))
    (show (ball 66))
    (show (ball 1024))))

;; From:
;;  http://onlinestatbook.com/2/regression/intro.html
;;  http://onlinestatbook.com/2/case_studies/sat.html
(define (gpa-sat-test)
  (define data (file->generator "gpa-sat-data.scm"
                                (lambda (high-gpa math-sat verb-sat comp-gpa univ-gpa)
                                  (list high-gpa univ-gpa))))
  (let ((ball (make-crystal-ball data)))
    (show "2.7 => " (ball 2.7))
    (show "3.0 => " (ball 3))
    (show "3.5 => " (ball 3.5))))

(gpa-sat-test)
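Since the heart of the trick is a single pass over the data feeding several accumulators, a compact cross-check may help. The following Python sketch re-implements the sigma/regression idea above (the function names mirror the Scheme, but the code is mine, not from the post) and verifies the closed-form slope and intercept against the Playtime data:

```python
def sigma(rows, *fns):
    """Walk the data once, keeping one running sum per supplied function."""
    sums = [0] * len(fns)
    for row in rows:
        for i, fn in enumerate(fns):
            sums[i] += fn(*row)
    return sums

def linear_regression(pairs):
    # One pass produces every total the closed-form formulas need:
    # m = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2),  b = (Sy - m*Sx) / n
    Sxy, Sx, Sy, Sxx, n = sigma(pairs,
                                lambda x, y: x * y,
                                lambda x, y: x,
                                lambda x, y: y,
                                lambda x, y: x * x,
                                lambda x, y: 1)
    m = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
    b = (Sy - m * Sx) / n
    return m, b

def make_crystal_ball(pairs):
    # Return a closure that predicts y for a given x, as in the Scheme version.
    m, b = linear_regression(pairs)
    return lambda x: m * x + b

ball = make_crystal_ball([(60, 3.1), (61, 3.6), (62, 3.8), (63, 4.0), (65, 4.1)])
print(ball(64))  # roughly 4.06
```

Because sigma only iterates its input, the rows could just as well come from a generator reading a file, matching the Scheme design.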

-1:-- Got Data? Adventures in Virtual Crystal Ball Creation (Post Ben Simon (noreply@blogger.com))--L0--C0--June 16, 2016 04:35 PM

Marcin Borkowski: Displaying pdfs on the right

Some time ago I decided that I’d give PDF Tools a shot. Wow. Just wow. Never going back to Evince. (Well, almost never – Evince does have one or two options not present in PDF Tools, but I hardly ever use them.) If you are an Emacs user on GNU+Linux, I strongly advise you to try PDF Tools out. Especially if you do a lot of LaTeX work in AUCTeX (like I do) and/or if you need to read or create PDF annotations (highlighting and/or notes) – it’s really great. (I did once experience some problems with said annotations, though.) I was a bit worried about efficiency – but man, it’s really fast! A great piece of work indeed. I have a few problems with it, though.
-1:-- Displaying pdfs on the right (Post)--L0--C0--June 13, 2016 08:14 PM

Grant Rettke: Prettifying Org-Mode Code Blocks for Presentations

Org-Mode code blocks are verbose and lovable for literate programming. Rasmus wants to use the raw literate document for a presentation though, so that verbosity won’t do. He explains here how to prettify code blocks. The value-add here is that he doesn’t have to weave (export) the document for it to look great in the presentation; it already does directly in the Emacs buffer.

-1:-- Prettifying Org-Mode Code Blocks for Presentations (Post Grant)--L0--C0--June 13, 2016 02:19 AM

Endless Parentheses: Provide input to the compilation buffer

The Emacs compile command is a severely underused tool. It allows you to run any build tool under the sun and provides error-highlighting and jump-to-error functionality for dozens of programming languages, but many an Emacser is still in the habit of switching to a terminal in order to run make, lein test, or bundle exec. It does have one limitation, though. The compilation buffer is not a real shell, so if the command being run asks for user input (even a simple y/n confirmation) there’s no way to provide it.

(Since posting this, I’ve learned that part of it is mildly futile. Read the update below for more information.)

Fortunately, that’s not hard to fix. The snippet below defines two commands. The first one prompts you for input and then sends it to the underlying terminal followed by a newline, designed for use with prompts and momentary REPLs. The second is a command that simply sends the key that was pressed to invoke it, designed for easily replying to y/n questions or quickly quitting REPLs with C-d or C-j.

(defun endless/send-input (input &optional nl)
  "Send INPUT to the current process.
Interactively also sends a terminating newline."
  (interactive "MInput: \nd")
  (let ((string (concat input (if nl "\n"))))
    ;; This is just for visual feedback.
    (let ((inhibit-read-only t))
      (insert-before-markers string))
    ;; This is the important part.
    (process-send-string
     (get-buffer-process (current-buffer))
     string)))

(defun endless/send-self ()
  "Send the pressed key to the current process."
  (interactive)
  (endless/send-input
   (apply #'string
          (append (this-command-keys-vector) nil))))

(dolist (key '("\C-d" "\C-j" "y" "n"))
  (define-key compilation-mode-map key
    #'endless/send-self))

This is something I’ve run into for years, but I finally decided to fix it because it meant I couldn’t run Ruby’s rspec in the compilation buffer if my code contained a binding.pry (which spawns a REPL). Now I can actually interact with this REPL via C-c i or just quickly get rid of it with C-d. If you run into the same situation, you should also set the following option in your .pryrc file.

Pry.config.pager = false if ENV["INSIDE_EMACS"]

Update 17 Jun 2016

As Clément points out in the comments, you can run compilation commands in comint-mode by providing the C-u prefix to M-x compile. You still have all of the usual compilation-mode features (like next/previous-error), with the additional benefit that the buffer accepts input like a regular shell does.

The only caveat is that, since the buffer is modifiable, you lose some convenience keys like q to quit the buffer or g to recompile, so you’ll need to bind them somewhere else:

(define-key compilation-minor-mode-map (kbd "<f5>")
  #'recompile)
(define-key compilation-minor-mode-map (kbd "<f9>")
  #'quit-window)

(define-key compilation-shell-minor-mode-map (kbd "<f5>")
  #'recompile)
(define-key compilation-shell-minor-mode-map (kbd "<f9>")
  #'quit-window)

I still like that the previous solution gives me quick access to C-d and y/n for those cases when I forget to use comint-mode, but the solution I had for inputting long strings is definitely redundant now. Instead, we can have a key that restarts the current compilation in comint-mode:

(require 'cl-lib)
(defun endless/toggle-comint-compilation ()
  "Restart compilation with (or without) `comint-mode'."
  (interactive)
  (cl-callf (lambda (mode) (if (eq mode t) nil t))
      (elt compilation-arguments 1))
  (recompile))

(define-key compilation-mode-map (kbd "C-c i")
  #'endless/toggle-comint-compilation)
(define-key compilation-minor-mode-map (kbd "C-c i")
  #'endless/toggle-comint-compilation)
(define-key compilation-shell-minor-mode-map (kbd "C-c i")
  #'endless/toggle-comint-compilation)

Comment on this.

-1:-- Provide input to the compilation buffer (Post)--L0--C0--June 09, 2016 12:00 AM

Pragmatic Emacs: Insert file name

Here is a simple function from the Emacs wiki to insert the name of a file into the current buffer. The convenient thing is that it uses the normal find-file prompt with whatever your completion setting is, so it works very easily. I bind it to C-c b i and, as the documentation says, by default it inserts a relative path, but called with a prefix (C-u C-c b i) it inserts the full path.

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; insert file name at point                                              ;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; https://www.emacswiki.org/emacs/InsertFileName
(defun bjm/insert-file-name (filename &optional args)
  "Insert name of file FILENAME into buffer after point.

  Prefixed with \\[universal-argument], expand the file name to
  its fully canonicalized path.  See `expand-file-name'.

  Prefixed with \\[negative-argument], insert the file name
  exactly as it appears in the minibuffer prompt.

  The default with no prefix is to insert the path relative to
  the current directory, `default-directory'.  See
  `file-relative-name'."
  ;; Based on insert-file in Emacs -- ashawley 20080926
  (interactive "*fInsert file name: \nP")
  (cond ((eq '- args)
         (insert filename))
        ((not (null args))
         (insert (expand-file-name filename)))
        (t
         (insert (file-relative-name filename)))))

;; bind it
(global-set-key (kbd "C-c b i") 'bjm/insert-file-name)
-1:-- Insert file name (Post Ben Maughan)--L0--C0--June 08, 2016 08:43 PM

John Stevenson: Using Github Gists From Spacemacs

Github Gists are really useful when you want to share a piece of code or configuration without setting up a version control project. Rather than copying & pasting into the Github Gists website, you can create a Gist from any Spacemacs buffer with a single command.

All you need to do is add the github layer to your ~/.spacemacs configuration file and reload your configuration with M-m f e R, or restart Spacemacs. Let's see just how easy it is to use Gists with Spacemacs.

You can also use gist.el with your own Emacs configuration

Connecting to your Github account

When you first run any of the Gist or Github commands you will be prompted for your username, password and two-factor code. The gist.el code will create a personal access token on your Github account, avoiding the need to prompt for your Github login details each time.

If you are prompted to enter your personal access token in Emacs, then visit your Github profile page and view the personal access tokens section. Edit the token named git.el and regenerate it. This will take you back to the personal access tokens page and display the new token for git.el. Copy this token into the [github] section of your ~/.gitconfig as follows:

[github]
user = jr0cket
oauth-token = thisishweretherealtokenshouldbepasted

If git.el adds a password line to the [github] section of your ~/.gitconfig you should remove that password line. These Github actions only require your username and token.

Creating a Gist from Spacemacs

The current buffer can be copied into a Github Gist using the command M-x gist-buffer.

Gist - create a Gist from the current buffer

You can also create a gist just from a selected region of the buffer. First select the region using C-SPC and run the command M-x gist-region.

If this is the first time using Github from Spacemacs, you will be prompted for your Github username & password. If you have already used Github from Spacemacs, then your account details will have been saved so you do not need to enter them each time.

Keyboard shortcuts

  • M-m g g b : create a public gist from the current Spacemacs buffer
  • M-m g g B : create a private gist from the current Spacemacs buffer
  • M-m g g r : create a public gist from the highlighted region
  • M-m g g R : create a private gist from the highlighted region
  • M-m g g l : list all gists on your github account

Replace M-m with SPC if you are using Spacemacs evil mode

Updating a Gist

When you create a Gist from a buffer there is no direct link between your buffer and the Gist. So if you make changes to your buffer that you want to share, you can generate a new gist using M-x gist-buffer and delete the original one (see listing & managing gists below).

Alternatively, once you have created a Gist, you can open that Gist in a buffer and make changes. When you save your changes in the Gist buffer, C-x C-s, the gist on gist.github.com is updated.

Listing & managing Gists

Use the command M-x gist-list or keybinding M-m g g l to show a list of your current Gists.

Spacemacs - Gist list

In the buffer containing the list of your gists, you can use the following commands

  • RETURN : opens the gist in a new buffer
  • g : reload the gist list from server
  • e : edit the gist description, so you know what this gist is about
  • k : delete current gist
  • b : opens the gist in the current web browser
  • y : show current gist url & copies it into the clipboard
  • * : star gist (stars do not show in gist list, only when browsing them on github)
  • ^ : unstar gist
  • f : fork gist - create a copy of your gist on gist.github.com
  • + : add a file to the current gist, creating an additional snippet on the gist
  • - : remove a file from the current gist

Creating Gists from files

If you open a dired buffer you can make gists from the marked files (mark with m) by pressing @. This will make a public gist out of the marked files (or, called with a prefix, a private gist).

Gist - create a gist from the marked files in dired

Summary

It's really easy to share code and configuration with Github Gists. It's even easier when you use Spacemacs to create and manage gists for you. Have fun sharing your code & configurations with others via gists.

Thank you.
@jr0cket

-1:-- Using Github Gists From Spacemacs (Post)--L0--C0--June 06, 2016 01:16 AM

(or emacs: Set an Emacs variable with double completion

I'd like to show off a certain Elisp productivity booster that I've had in an unfinished state for a while and finished just today.

A large part of tweaking Elisp is simply setting variables. The new command, counsel-set-variable, lets you set them quite a bit faster.

Completion stage 1:

First of all, you get completion for all variables that you have defined:

counsel-set-variable-1.png

Completion stage 2:

Once a symbol is selected, the code checks whether the symbol is a defcustom with type 'boolean or 'radio. If it is, all the values that the symbol is allowed to take can be offered for completion.

For example, here's a typical 'radio-type definition:

(defcustom avy-style 'at-full
  "The default method of displaying the overlays.
Use `avy-styles-alist' to customize this per-command."
  :type '(choice
          (const :tag "Pre" pre)
          (const :tag "At" at)
          (const :tag "At Full" at-full)
          (const :tag "Post" post)
          (const :tag "De Bruijn" de-bruijn)))

And here's a completion screen offered for this variable:

counsel-set-variable-2.png

It is worth noting that the current value of the variable is pre-selected, to give a nice reference point for the new setting.

In case the symbol isn't a boolean or a radio

Then you get a completion session similar to M-x read-expression, but with the initial contents already filled in. For example:

counsel-set-variable-3.png

The read-expression part combines well with this setting in my config:

(defun conditionally-enable-lispy ()
  (when (eq this-command 'eval-expression)
    (lispy-mode 1)))

(add-hook
 'minibuffer-setup-hook
 'conditionally-enable-lispy)

Here's a series of commands using lispy-mode that I would typically use for the screenshot above, to set ivy-re-builders-alist to a new value:

  1. C-f (forward-char) to get into special.
  2. -e (lispy-ace-subword) to mark the plus part of the code.
  3. C-d (lispy-delete) to delete the active region.

After the C-f -e C-d chain of bindings, the minibuffer contents become:

(setq ivy-re-builders-alist '((t . ivy--regex-|)))

I press C-M-i (completion-at-point) to get completion for all symbols that start with ivy--regex-. Since I have ivy-mode on, C-M-i starts a recursive completion session. I highly recommend adding these settings to your config:

;; Allow to read from minibuffer while in minibuffer.
(setq enable-recursive-minibuffers t)

;; Show the minibuffer depth (when larger than 1)
(minibuffer-depth-indicate-mode 1)

Finally, I would select e.g. ivy--regex-fuzzy and press RET RET to finalize the eval. The first RET exits from completion-at-point and the second RET exits from counsel-set-variable.

Outro

I think this command, especially the newly added read-expression part, is quite a bit faster than what I did before: switching to *scratch*, typing in the setq manually and evaluating with C-j. Here's my binding for the new command:

(global-set-key (kbd "<f2> j") 'counsel-set-variable)

It's not very mnemonic, but it's really fast. Just a suggestion, in case you don't know where to bind it. Happy hacking!

-1:-- Set an Emacs variable with double completion (Post)--L0--C0--June 05, 2016 10:00 PM

Phil Hagelberg: in which four cards make a gang

I really enjoy programming with my kids. For me helping them learn how computers work is more about training them in logical thinking, creativity, and problem solving than it is about teaching them to accomplish specific tasks with software. My goal isn't to help them land lucrative programming jobs when they get older, but to expand their horizons with skills they can use in any kind of profession.

Over the past couple years, we've gotten the chance to create a few different games in several different styles. My kids were recently gifted a deck of Gang of Four cards and were playing it nearly every day. This made it easy for them to get on board with the idea of turning it into a computer game when I suggested it. Adapting a card game turns out to be a great way for beginners to learn about programming since you start with a very clear goal, and it is easily broken up into natural steps.

I've tried to document here the topics that came up as we progressed through building one of these games, but I also think they learned a lot from just seeing how the program comes together bit by bit and from debugging when we ran into problems. While my kids can type and often do work independently, I've found that the best approach usually has me at the keyboard, guiding them through the steps in a kind of Socratic questioning style. There are certainly times when I'll cheat and simply cut off an avenue of thought that I feel will be unproductive or frustrating, but ideally I try to stick with asking questions and typing.

Right now our weapon of choice is the Lua programming language due to its relentless simplicity and the availability of the wonderful LÖVE game framework. While we've done graphical games with LÖVE, this one makes more sense to start out as a plain Lua game that uses console input and output. The complete source code is available on GitLab.

make_deck = function()
  local deck = {}
  for i=1,10 do
    for _,c in pairs({0.1, 0.2, 0.3}) do
      table.insert(deck, i + c) -- two of each
      table.insert(deck, i + c)
    end
  end
  table.insert(deck, 1.4) -- special 1+ card
  table.insert(deck, 11.1) -- two phoenixes
  table.insert(deck, 11.2)
  table.insert(deck, 12.3) -- dragon
  return lume.shuffle(deck)
end

Constructing the deck leads to some good opening questions of how to represent cards and what hands should look like. After seeing it deal out the exact same hands a few times, it also offered an opportunity to talk about what it means for a process to be deterministic and why you need to seed your random number generator to make the game fun.

Once you add a loop which prints your hand and asks you which cards you want to play, the game is playable (in a "hot-seat" multiplayer style) as long as you already know the rules. Of course you will want to add checks for legal plays, but the minimum required for a playable prototype is here. Working in small discrete steps like this really helps with kids because reaching each milestone feels like a big win.

After we added in functions to enforce the rules, we began to add computer players. Writing AI may seem like a really advanced topic, but for a card game like this it's pretty straightforward. Granted our computer players don't always make the most strategic decisions, but they get by pretty well with a basic strategy of always trying to get rid of their lowest-ranked cards. Here again we found a way to break it into smaller steps—first the computer players are added to the rounds but only know how to pass, then they learn to play during single-card rounds, then they learn doubles and triples, etc. Writing computer players also led to a discussion about re-usable functions; many of the things we needed we had just implemented to determine whether a given hand was legal.

writing pong on the porch

In Gang of Four, the most powerful hand is a "gang", a set of four or more which can always beat any non-gang hand. It's not unusual for a gang of four to appear, but a gang of five is pretty rare. We've only seen a gang of six once, and while it is theoretically possible to get a gang of seven (there is only one way to make a gang of seven since most numbers only have six cards) the odds are astronomically against it. Still, the possibility that a gang of seven could exist kept my kids' fascination.

But now that we have a functioning simulation of the game, we can run through a simulation of dealing out hands over and over again and check for gangs. We found that running a repeated dealing simulation could sometimes find a gang of seven after as little as 3,000 games, but sometimes it would take up to 120,000. Not only does this give a good opportunity to talk about permutations, histograms, and simple optimizations[1], but it also serves as a great demonstration of using code to satisfy your own curiosity.
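The deal-and-check loop is easy to sketch; here is a Python rendering of it (not the family's Lua code). The 16-card hands (four players splitting the 64-card deck) and the sorted-hand shortcut from footnote [1] are my reading of the game:

```python
import random

def make_deck():
    # Ranks 1-10 in three suits (.1/.2/.3), two of each card, plus the
    # special 1+ card, two phoenixes and the dragon: 64 cards in all.
    deck = []
    for rank in range(1, 11):
        for suit in (0.1, 0.2, 0.3):
            deck += [rank + suit, rank + suit]
    deck += [1.4, 11.1, 11.2, 12.3]
    return deck

def has_gang_of_seven(hand):
    # Only the seven 1s can form a gang of seven, so on a sorted hand it is
    # enough to check that the seventh-lowest card is still a 1.
    return int(sorted(hand)[6]) == 1

def deals_until_gang_of_seven(max_deals=200_000, seed=None):
    # Shuffle and deal four 16-card hands until someone holds a gang of seven.
    rng = random.Random(seed)
    deck = make_deck()
    for deal in range(1, max_deals + 1):
        rng.shuffle(deck)
        hands = (deck[0:16], deck[16:32], deck[32:48], deck[48:64])
        if any(has_gang_of_seven(h) for h in hands):
            return deal
    return None  # none found within the cap
```

Running deals_until_gang_of_seven repeatedly gives the kind of spread described above: sometimes a few thousand deals, sometimes vastly more.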

gang of four cards

The grand finale of this enterprise involved the realization that the ComputerCraft mod for Minecraft provides a way to run Lua code in-game, which should be compatible with what we had just written. While this wasn't quite seamless,[2] seeing code that we'd so far only run in a regular terminal now running on an in-game machine was quite a thrill.

We still have a few steps further open to us from here. Implementing multiplayer over a simple socket interface is not too difficult[3], but it forces you to revisit the assumptions you have so far about input and output. Another fun direction would be to add a GUI for the game using LÖVE's graphical capabilities. Or implementing a user interface in a Minetest Lua mod where the cards are individual items. The point is being able to explore whatever direction the kids' interests take them.


[1] The only possible gang of seven is a hand of seven ones, so you can quickly check for a gang of seven on a sorted hand by seeing if the seventh card is a 1. There are lots of other optimizations you can make, but they were excited to see significant speed boosts from replacing the naive check with this one. But there was also a complexity cost if we wanted to keep the old code around for compatibility with detecting gangs of 4, 5, or 6.

[2] Getting the code from the real world into a ComputerCraft machine is a major pain in the neck, and the displays in ComputerCraft are so ridiculously low-resolution that we had to add paging or you would miss important details. Adding multiplayer to the ComputerCraft version with its networking API would be a good next step here.

[3] We actually already implemented multiplayer, but this post is getting long enough as it is.

-1:-- in which four cards make a gang (Post Phil Hagelberg)--L0--C0--June 05, 2016 03:13 PM

Chen Bin (redguardtoo): New git-timemachine UI based on ivy-mode

When I use git-timemachine, I prefer to start from a revision I select instead of HEAD.

Here is my code, based on ivy-mode:

(defun my-git-timemachine-show-selected-revision ()
  "Show last (current) revision of file."
  (interactive)
  (let (collection)
    (setq collection
          (mapcar (lambda (rev)
                    ;; re-shape list for the ivy-read
                    (cons (concat (substring (nth 0 rev) 0 7) "|" (nth 5 rev) "|" (nth 6 rev)) rev))
                  (git-timemachine--revisions)))
    (ivy-read "commits:"
              collection
              :action (lambda (rev)
                        (git-timemachine-show-revision rev)))))

(defun my-git-timemachine ()
  "Open git snapshot with the selected version.  Based on ivy-mode."
  (interactive)
  (unless (featurep 'git-timemachine)
    (require 'git-timemachine))
  (git-timemachine--start #'my-git-timemachine-show-selected-revision))

Screenshot after M-x my-git-timemachine,

my-git-timemachine-nq8.png

-1:-- New git-timemachine UI based on ivy-mode (Post Chen Bin)--L0--C0--June 05, 2016 02:32 PM

Alex Schroeder: nginx as a caching proxy

Remember the fiasco at the end of 2014 and the beginning of 2015, when Emacs Wiki was down as I tried to switch to FastCGI, and it stayed down as I tried to use mod_perl, until finally Nic Ferrier stepped in and started paying for new hosting? He used nginx as a caching proxy for Apache, which continued to run the wiki as a simple CGI script.

The comments reveal quite a few caching issues, though. The problem was that nginx seemed to ignore the fact that two pages with an identical URL but different language preferences could differ. This in turn meant that a German visitor to a page not in the cache would put the page in the cache, but with German menus at the bottom, for example (“Diese Seite bearbeiten” instead of “Edit this page”). And if the next visitor to the same page didn’t understand German, this was annoying.

I thought the answer was that Apache should send the “Vary” header telling the cache that accept-language should be part of the cache key. See this discussion of the Vary header, for example: Understanding the HTTP Vary Header and Caching Proxies. But it just didn’t work.

Recently, I suddenly realized that perhaps nginx doesn’t care about the Vary header Apache is sending. I learned about the proxy_cache_key setting in the nginx caching guide. As you can see, the default is inadequate! $scheme$proxy_host$request_uri is not what I’m expecting. I needed to add $http_accept_language to that key:

# proxy cache key is $scheme$proxy_host$request_uri by default
proxy_cache_key $scheme$proxy_host$request_uri$http_accept_language;
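To see why the default key mixes languages, here's a toy model of the cache lookup (the hostname and stored value are made-up illustrations, not from the actual setup):

```python
def cache_key(scheme, proxy_host, request_uri, accept_language=None):
    # Default nginx key: $scheme$proxy_host$request_uri.  The visitor's
    # Accept-Language header plays no part in it unless explicitly added.
    key = scheme + proxy_host + request_uri
    if accept_language is not None:
        key += accept_language  # the widened key from the fix
    return key

cache = {}
# A German visitor misses the cache and stores the German rendering...
cache[cache_key("http", "emacswiki.org", "/emacs/SiteMap")] = "page with German menus"
# ...and with the default key, an English visitor is served that same copy.
assert cache[cache_key("http", "emacswiki.org", "/emacs/SiteMap")] == "page with German menus"
# Including the language in the key separates the two renderings.
assert cache_key("http", "emacswiki.org", "/emacs/SiteMap", "de") != \
       cache_key("http", "emacswiki.org", "/emacs/SiteMap", "en")
```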

I get the feeling that things are working right now! If so, it only took 17½ months to find the solution. I’m writing it down in case anybody else stumbles over this problem (and in case I need to remember it in the future).

Perhaps the problem is also related to how I compute Etags for my pages: the wiki doesn’t take accept-language into account when computing Etags. Now that I think about it, perhaps that would also have helped solve the problem?


-1:-- nginx as a caching proxy (Post)--L0--C0--May 28, 2016 03:19 PM

Flickr tag 'emacs': The ultrabook running emacs

david49100 posted a photo:

The ultrabook running emacs

-1:-- The ultrabook running emacs (Post david49100 (nobody@flickr.com))--L0--C0--May 28, 2016 12:12 PM

Jorgen Schäfer: Circe 2.3 released

We just released version 2.3 of Circe, a Client for IRC in Emacs.

The package is available from github, MELPA stable and MELPA unstable. The latter will track further development changes, so use at your own risk.

Changes

  • Circe (Lui) now has a track bar. Use (enable-lui-track-bar) to get a bar marking where you stopped reading when you hid a buffer.
  • Buffers are now by default limited to 100k, as large buffers cause unreasonable slowdown in Emacs.
  • Autopaste now defaults to ix.io and also knows about ptpb.pw.
  • A number of faces have been updated to be nicer to the eye.
  • Improve compatibility with the Slack IRC gateway.
  • Lots of bug fixes.

Thanks to defanor, Janis Dzerins and Vasilij Schneidermann for their contributions.

-1:-- Circe 2.3 released (Post Jorgen Schäfer (noreply@blogger.com))--L0--C0--May 27, 2016 01:24 PM

Raimon Grau: keysnail plugin to navigate relations

I'm using keysnail as my emacsy browser. It's heavier than conkeror, but I'd say it's also more compatible with common plugins (ad blockers, cookie managers, RES, ...).

A feature not present in keysnail (until now) was the ability to navigate through a site's hierarchy without having to hunt for the actual link (if any).

So I wrote this super simple keysnail-navigate-relations plugin that provides 3 commands (go-next, go-prev, go-up) and 3 keybindings (]], [[, ^) so you can navigate much more easily through structured sites.

Possible uses for it are:

As always, feedback is more than welcome.
-1:-- keysnail plugin to navigate relations (Post Raimon Grau (noreply@blogger.com))--L0--C0--May 26, 2016 03:58 PM

Chen Bin (redguardtoo): Complete line with ivy-mode

Complete current line by git grep and ivy-mode.

(defun counsel-escape (keyword)
  (setq keyword (replace-regexp-in-string "\\$" "\\\\\$" keyword))
  (replace-regexp-in-string "\"" "\\\\\"" keyword))

(defun counsel-replace-current-line (leading-spaces content)
  (beginning-of-line)
  (kill-line)
  (insert (concat leading-spaces content))
  (end-of-line))

(defun counsel-git-grep-complete-line ()
  (interactive)
  (let* (cmd
        (cur-line (buffer-substring-no-properties (line-beginning-position)
                                                  (line-end-position)))
        (default-directory (locate-dominating-file
                            default-directory ".git"))
        keyword
        (leading-spaces "")
        collection)
    (setq keyword (counsel-escape (if (region-active-p)
                                      (buffer-substring-no-properties (region-beginning)
                                                                      (region-end))
                                    (replace-regexp-in-string "^[ \t]*" "" cur-line))))
    ;; grep lines without leading/trailing spaces
    (setq cmd (format "git --no-pager grep -I -h --no-color -i -e \"^[ \\t]*%s\" | sed s\"\/^[ \\t]*\/\/\" | sed s\"\/[ \\t]*$\/\/\" | sort | uniq" keyword))
    (when (setq collection (split-string (shell-command-to-string cmd) "\n" t))
      (if (string-match "^\\([ \t]*\\)" cur-line)
          (setq leading-spaces (match-string 1 cur-line)))
      (cond
       ((= 1 (length collection))
        (counsel-replace-current-line leading-spaces (car collection)))
       ((> (length collection) 1)
        (ivy-read "lines:"
                  collection
                  :action (lambda (l)
                            (counsel-replace-current-line leading-spaces l))))))
    ))
(global-set-key (kbd "C-x C-l") 'counsel-git-grep-complete-line)

I also tried plain grep, which is too slow for my project.

-1:-- Complete line with ivy-mode (Post Chen Bin)--L0--C0--May 23, 2016 01:41 AM

Mark Hershberger: Making git and Emacs’ eshell work together

tl;dr

I often end up in eshell.

Sometimes, because I’m running Emacs on Windows (where shells work, but it’s a pain), and sometimes just because.  The problem, until today, was that anytime I would invoke a git command that wanted to call the pager (say, diff), I would see the following annoying message from less:

WARNING: terminal is not fully functional
-  (press RETURN)

This happens because eshell sets $TERM to “dumb”.  It doesn’t try to fool anyone.  It’s dumb.

But, since I’m stubborn and lazy, I just put up with it the stupidity of eshell and the annoyance of git’s invocation of less.  Till today.

Some would say the answer is “Don’t use git in emacs — use magit!” And they’d be right. I do use magit, but plain git commands should work in eshell, too.

So, after a spree of productivity yesterday, I woke up today and hit that annoying message again.  I decided to track it down.

I came across this StackOverflow thread. There is a hint there — I didn’t know less could be told to only page in certain cases — but not anything that says “only sometimes use less”.

So I managed to hack something together:

git config --global core.pager '`test "$TERM" = "dumb" && echo cat || echo less`'
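Because the pager value is wrapped in backticks, git re-evaluates it on every invocation. A small sketch of how that test resolves under each terminal type:

```shell
# Under eshell, TERM is "dumb", so the pager resolves to cat (no paging).
TERM=dumb
pager=$(test "$TERM" = "dumb" && echo cat || echo less)
echo "dumb -> $pager"

# In a normal terminal, TERM is something like xterm-256color, so less is used.
TERM=xterm-256color
pager2=$(test "$TERM" = "dumb" && echo cat || echo less)
echo "xterm-256color -> $pager2"
```

So the same global config gives sensible behavior both inside eshell and in a regular terminal.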

I haven’t tried this under Windows, yet, but I’m hoping it works.

Image CC-by-SA: Richard Bartz, Munich Makro Freak

-1:-- Making git and Emacs’ eshell work together (Post hexmode)--L0--C0--May 22, 2016 04:30 PM

punchagan: blog-admin and Nikola

Another post about blogging.

blog-admin now supports Nikola, thanks to yours truly. blog-admin is an Emacs package by CodeFalling that lets you view and manage your (static site generated) blog from within Emacs.

Nikola's command line utility is pretty nifty and does a bunch of useful things. I had a few utility functions for common tasks like creating a new post and deploying the blog. This worked well, but the moment I came across blog-admin's tabular view, I was sold!

org2blog (a blogging tool I used previously) had a tracking file that kept track of all the posts I made, and I used it quite a bit for navigation – thanks to org-mode's search functionality. The tabular view of blog-admin is even better! I really like the fact that the author has tried to keep the package generic enough to support any blog, and adding support for Nikola has been quite easy.

The filtering functionality is crude, but good enough for a start. One thing I want to add is a preview functionality for drafts. Showing some (writing) statistics would also be nice – the number of posts in the last month, total published posts, etc. No promises, but you may see some of these things, soon. :)

-1:-- blog-admin and Nikola (Post punchagan)--L0--C0--May 21, 2016 02:58 PM

Alex Schroeder: Renaming Files

I had a bunch of files named like this, from various albums.

Cure - [1984] The top - 01 - Shake dog shake.mp3
Cure - [1984] The top - 02 - Bird mad girl.mp3
Cure - [1984] The top - 03 - Wailing wall.mp3

I wanted to move them into subdirectories, one for each album.

(dolist (file (directory-files "c:/Users/asc/Music/The Cure" t "\\.mp3$"))
  (let* ((name (file-name-nondirectory file))
         ;; "Cure - [1984] The top - 01 - Shake dog shake.mp3" splits into
         ;; ("Cure" "[1984] The top" "01" "Shake dog shake.mp3")
         (data (split-string name " - "))
         (album (nth 1 data))           ; e.g. "[1984] The top"
         (dir (concat "c:/Users/asc/Music/The Cure/" album)))
    (unless (file-directory-p dir)
      (make-directory dir))
    (rename-file file (concat dir "/" name))))
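The same move-into-album-subdirectories logic can be sketched in plain shell, assuming the same "Artist - [Year] Album - NN - Title.mp3" naming pattern; this demo creates throwaway files in a temp directory so it is safe to run anywhere:

```shell
# Create sample files matching the naming pattern in a scratch directory.
dir=$(mktemp -d)
cd "$dir"
touch "Cure - [1984] The top - 01 - Shake dog shake.mp3"
touch "Cure - [1984] The top - 02 - Bird mad girl.mp3"

# Move each file into a subdirectory named after its album,
# i.e. the second " - "-separated field of the filename.
for f in *.mp3; do
  album=$(printf '%s' "$f" | awk -F' - ' '{print $2}')
  mkdir -p "$album"
  mv "$f" "$album/$f"
done
ls "[1984] The top"
```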

Tags: RSS

-1:-- Renaming Files (Post)--L0--C0--May 18, 2016 07:26 AM

William Denton: Conforguring dotfiles

I’ve added my dotfiles to Conforguration: they are there as raw files in the dotfiles directory, and conforguration.org has code blocks that will put them in place on localhost or remote machines.

I did some general cleanup to the file as well. There’s a lot of duplication, which I think some metaprogramming might fix, but for now it works and does what I need. My .bashrc is now finally the same everywhere (custom settings go in .bash.$HOSTNAME.rc) which is a plus.
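The per-host include pattern mentioned above can be sketched as a single line in the shared .bashrc; this demo points HOME at a throwaway directory so it is safe to execute, and the GREETING variable is made up for illustration:

```shell
# Simulate a host-specific rc file under a scratch HOME.
HOME=$(mktemp -d)
HOSTNAME=demohost
echo 'GREETING=hello-from-demohost' > "$HOME/.bash.$HOSTNAME.rc"

# The include line that would live in the shared .bashrc:
# source the host-specific file only if it exists.
[ -f "$HOME/.bash.$HOSTNAME.rc" ] && . "$HOME/.bash.$HOSTNAME.rc"
echo "$GREETING"
```

Machines without a matching .bash.$HOSTNAME.rc simply skip the include, so one .bashrc can be deployed everywhere unchanged.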

-1:-- Conforguring dotfiles (Post William Denton)--L0--C0--May 10, 2016 02:24 PM