User:Korrawit/CleanupWikiLinks

Disclaimer: Some data may change as time goes on. And I wrote this two weeks after the fact, so some details may be incorrect. :)

Since the upgrade of MediaWiki to version 1.17, there has been a problem with hard-coded direct links to files, the ones containing cgi_img_auth.php: those links no longer work. So I decided to clean this up by replacing them with the Media: namespace, which generates a direct link on the fly.
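
For example, the replacement turns a hard-coded link like this (using a filename that appears later on this page):

[http://wiki.documentfoundation.org/cgi_img_auth.php/c/c3/LibreOffice_Initial_Icons-Christoph.svg LibreOffice_Initial_Icons-Christoph.svg]

into a namespace link like this:

[[Media:LibreOffice_Initial_Icons-Christoph.svg|LibreOffice_Initial_Icons-Christoph.svg]]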

Tool

I've used Pywikipediabot.

Setting up

  1. Installation
  2. Some more setup
    1. You have two ways to create the family file. I named mine families/tdf_family.py.
      1. Run python generate_family_file.py. This will create it automatically.
      2. Manually create it.
    2. Create user-config.py as described here

This is my families/tdf_family.py.

# -*- coding: utf-8  -*-

import family

class Family(family.Family):
    def __init__(self):
        family.Family.__init__(self)
        self.name = 'tdf' # Set the family name; this should be the same as in the filename.
        self.langs = {
            'en': 'wiki.documentfoundation.org', # Put the hostname here.
        }

        self.namespaces[4] = {
            '_default': u'The Document Foundation Wiki', # Specify the project namespace here.
        }

        self.namespaces[5] = {
            '_default': u'The Document Foundation Wiki talk', # Specify the talk page of the project namespace here. 
        }

    def version(self, code):
        return "1.17.0"  # The MediaWiki version used. Not very important in most cases.

    def scriptpath(self, code):
        return '' # The relative path of index.php, api.php : look at your wiki address.

And this is my user-config.py.

mylang='en'
family = 'tdf'
usernames['tdf']['en']=u'KorrawitBot'
console_encoding = 'utf-8'

Process overview

  1. Logging in to this wiki with login.py
  2. Getting a list of all pages in the article (main) namespace with pagegenerators.py
  3. Replacing hard-coded links using regular expressions with replace.py

Everything is run from the command line, in the Pywikipediabot root folder.

Logging in

Just run

python login.py

and enter your password.

Getting a list of all pages

Use pagegenerators.py to list all pages. Run

python pagegenerators.py -start:! > pageall

-start:! means start at the beginning of the list ("!" sorts before alphanumeric page titles).

Oops! A problem occurred. The page Svn:keywords caused it, because the bot thinks it is in an "Svn" namespace, but it isn't.

But that page doesn't have any hard-coded links to replace, so we can just skip it. Looking at All pages starting at Svn:keywords, the next page is System Operations. So run

python pagegenerators.py -start:"System Operations" > pageall2

to get the rest of the list. Note that the ideal solution would be to rename that page.

Next, combine the two files into a single pageall.
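
A minimal way to do this, assuming the first run's output in pageall is intact up to the point of failure, is to append the second list:

cat pageall2 >> pageall

The combined pageall now looks like this: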

   1: AST/Web Sites services
   2: Accessibility
   3: Accessibility/TextAttributes
   4: Adopt-o-meter
   5: BRX/Main Page
...

It has about a thousand lines. We would like a list of wikilinks instead, so

sed 's/ *[0-9]\{1,4\}: \(.*\)/[[\1]]/' pageall > pageall_link

gives:

[[AST/Web Sites services]]
[[Accessibility]]
[[Accessibility/TextAttributes]]
[[Adopt-o-meter]]
[[BRX/Main Page]]
...

Explanation of regular expression used:

 *            # match a space zero or more times, that is the space before a number
[0-9]\{1,4\}  # equals [0-9]{1,4}: match a digit 0-9 one to four times, though the more robust expression would be [0-9]+
:             # match a literal ": ", a colon and a space
\(.*\)        # equals (.*): match anything zero or more times and capture it in a group, which we'll reference later as \1

Note that in sed's basic regular expressions we have to escape {} and (), but not [].
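
A quick way to sanity-check the expression on a single line (any POSIX sed should behave the same):

echo "   5: BRX/Main Page" | sed 's/ *[0-9]\{1,4\}: \(.*\)/[[\1]]/'

which prints [[BRX/Main Page]].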

Replace hard-coded links

Now we have a list of all pages in pageall_link. Use replace.py to make the fix.

The hard-coded links are in many formats. For example:

http://wiki.documentfoundation.org/cgi_img_auth.php/3/3e/LibreOffice_Initial_Icons_Bernhard_draft_0-1.svg
[http://wiki.documentfoundation.org/cgi_img_auth.php/c/c3/LibreOffice_Initial_Icons-Christoph.svg LibreOffice_Initial_Icons-Christoph.svg]

So, start by replacing the ones with piped link text:

python replace.py -regex -ns:0 -v -summary:"change file direct-link to Media: namespace" -file:"pageall_link" \
"\[?http://wiki.documentfoundation.org/cgi_img_auth.php/[0-9a-f/]{5}([^ ]+)( )+([^\]\n]+)\]?" "[[Media:\1|\3]]"

Explanation of regular expressions used:

\[?           # match literal "[" for zero or one time
http://wiki.documentfoundation.org/cgi_img_auth.php/  # match this string
[0-9a-f/]{5}  # match 3/3e/ thingies. I guess it is hexadecimal, so [0-9a-f] is just fine.
(             # start capture group 1
  [^ ]+       #   match filename: anything except a space
)             # end capture group 1
( )+          # capture group 2, match one or more spaces between the link and the text to show. Some pages have more than one space.
              # IIRC, if I don't put a pair of brackets here, or just use " +", it fails, but I don't know why.
(             # start capture group 3, match the text to show
  [^          # match anything except the following:
    \]        #   a literal "]", which we will match at the end (another way to solve this is a non-greedy match)
    \n        #   a newline
  ]+          # and repeat it one or more times
)             # end capture group 3
\]?           # match literal "]" for zero or one time
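
To see what this does outside replace.py, here is a minimal standalone check with Python's re module, using a sample link from above:

import re

# The pattern as passed to replace.py; the unescaped dots in the URL are
# harmless here, since "." also matches a literal dot.
pattern = (r"\[?http://wiki.documentfoundation.org/cgi_img_auth.php/"
           r"[0-9a-f/]{5}([^ ]+)( )+([^\]\n]+)\]?")
sample = ("[http://wiki.documentfoundation.org/cgi_img_auth.php/c/c3/"
          "LibreOffice_Initial_Icons-Christoph.svg"
          " LibreOffice_Initial_Icons-Christoph.svg]")
print(re.sub(pattern, r"[[Media:\1|\3]]", sample))
# prints: [[Media:LibreOffice_Initial_Icons-Christoph.svg|LibreOffice_Initial_Icons-Christoph.svg]]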

Next, fix another pattern: a bare link without [].

python replace.py -regex -ns:0 -v -summary:"change file direct-link to Media: namespace" -file:"pageall_link" \
"([^\[])http://wiki.documentfoundation.org/cgi_img_auth.php/[0-9a-f/]{5}([^ \n]+)" "\1[[Media:\2]]"

Explanation of regular expressions used:

(             # start capture group 1
  [^\[]       #   match the character just before the link: anything but a literal "[",
)             # end capture group 1; this skips links already converted above, and \1 puts the character back
http://wiki.documentfoundation.org/cgi_img_auth.php/[0-9a-f/]{5} # explained above
(             # start capture group 2, this matches the filename
  [^ \n]+     #   anything except a space or a newline, one or more times
)             # end capture group 2
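
Again, a minimal standalone check with Python's re, using the bare-link sample from above:

import re

pattern = (r"([^\[])http://wiki.documentfoundation.org/cgi_img_auth.php/"
           r"[0-9a-f/]{5}([^ \n]+)")
sample = ("see http://wiki.documentfoundation.org/cgi_img_auth.php/3/3e/"
          "LibreOffice_Initial_Icons_Bernhard_draft_0-1.svg")
print(re.sub(pattern, r"\1[[Media:\2]]", sample))
# prints: see [[Media:LibreOffice_Initial_Icons_Bernhard_draft_0-1.svg]]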

These two should cover most cases, but we would like to check again with a very simple match:

python replace.py -ns:0 -v -file:"pageall_link" "cgi_img_auth.php" "" -save:"page_remaining" -always

This will save all pages that still match into page_remaining. And since we only -save the list, there are no real edits and no harm, so just use -always with -save.

Then check page_remaining again for any strange patterns. Pywikipediabot has an option to let us edit manually via the browser anyway.

Summary

At first, I replaced links with the File: namespace, but then noticed that the Media: namespace is more correct. The result is at Special:Contributions/KorrawitBot.

And as suggested by Florian on the website mailing list, I wrote Help:Editing#Linking_to_Image_or_File.

TODO

  • Patrol for new hard-coded links (in the currently working form). Maybe automatically?

Thanks