
# Assignment 1

## Part A

### Append

```
xsb
['A/append.P'].
suffix([1,2], [1,2,3,4]).
cut([1,2,3]).
```

### Reach

Generate the input set, then run:

```
python randomgen.py [number of nodes]
xsb
['A/reach.P'].
timeReach.
```

| Input size | Way 1 (s) | Way 2 (s) | Way 3 (s) | Way 4 (s) |
|---|---|---|---|---|
| 1000 | 0.0 | 0.0 | 0.765 | 0.766 |
| 2000 | 0.0 | 0.0 | 3.047 | 3.047 |
| 5000 | 0.0 | 0.0 | 18.828 | 18.953 |

There is a huge impact if we write the edge(X,Y) goal before the recursive reach goal. But when I tried this in SWI-Prolog (which has no cputime/1, so I could not time it the same way), there was no significant difference between the different ways of implementing it.
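
For context, the gap described above comes from where the edge(X,Y) goal sits relative to the recursive call. A minimal tabled sketch of the two orderings (illustrative only; the names reach_a/2 and reach_b/2 are mine, and A/reach.P may use different clauses):

```prolog
:- table reach_a/2, reach_b/2.

% edge/2 goal placed before the recursive call
reach_a(X, Y) :- edge(X, Y).
reach_a(X, Y) :- edge(X, Z), reach_a(Z, Y).

% recursive call placed before the edge/2 goal
reach_b(X, Y) :- edge(X, Y).
reach_b(X, Y) :- reach_b(X, Z), edge(Z, Y).
```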

### Transitive closure and cycle

```
xsb
['A/cycle.P'].
path(1,2).
cycle(1).
```
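
As a reminder of what these queries compute, here is a minimal tabled sketch with the same predicate signatures (illustrative only, not necessarily the exact clauses in A/cycle.P):

```prolog
:- table path/2.

% path(X, Y): Y is reachable from X via one or more edges.
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

% cycle(X): X lies on a cycle, i.e., X can reach itself.
cycle(X) :- path(X, X).
```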

### N-queens

```
xsb
['A/queens.P'].
timeNQueen(8).
timeOneQueen(8).
```

| Number of queens | Time for 1 queen (s) | Time for all queens (s) |
|---|---|---|
| 8 | 0.0 | 0.047 |
| 10 | 0.0 | 0.25 |
| 12 | 0.0 | 7.796 |
| 14 | 0.015 | 298.719 |
| 16 | 0.094 | Too long to run |
| 18 | 0.485 | Too long to run |
| 20 | 2.796 | Too long to run |
| 22 | 29.407 | Too long to run |

I found something interesting when I tried to use SWI-Prolog for this problem. The following code works in SWI-Prolog (but not in XSB):

```prolog
% attacks(Q1, Q2): two queens attack each other if they share
% a row, a column, or a diagonal.
attacks((Row1, Col1), (Row2, Col2)) :-
    Row1 =:= Row2;
    Col1 =:= Col2;
    abs(Row1 - Row2) =:= abs(Col1 - Col2).

% no_attacks(Queen, Queens): Queen attacks none of the queens in Queens.
no_attacks(_, []).
no_attacks(Queen, [OtherQueen|OtherQueens]) :-
    \+ attacks(Queen, OtherQueen),
    no_attacks(Queen, OtherQueens).

% queen_positions(N, Queens): one queen per row 1..N; the column list
% is hard-coded to 1..8, so this generator assumes an 8-column board.
queen_positions(0, []).
queen_positions(N, [(N, Col)|Queens]) :-
    N > 0,
    N1 is N - 1,
    queen_positions(N1, Queens),
    member(Col, [1,2,3,4,5,6,7,8]).

% legal_queens(Queens): no pair of queens in Queens attacks each other.
legal_queens([]).
legal_queens([Queen|Queens]) :-
    legal_queens(Queens),
    no_attacks(Queen, Queens).

% Generate a complete placement, then test it for legality.
n_queens(N, Solution) :-
    queen_positions(N, Solution),
    legal_queens(Solution).
```

but it runs ridiculously slowly (it takes seconds to compute one solution for 8 queens). I then played with it and rearranged it a little, and the answer I got (in A/queens.P) now runs much faster in SWI-Prolog and also runs successfully in XSB.
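
One standard rearrangement that produces this kind of speedup is to check each queen against the already-placed queens as soon as its column is chosen, instead of generating the whole placement first. A minimal SWI-Prolog-style sketch of that shape, reusing no_attacks/2 from the code above (queens_fast/2 and place/4 are names I made up for illustration, not necessarily what A/queens.P does):

```prolog
% queens_fast(N, Solution): place one queen per row, rejecting a column as
% soon as it attacks a queen that is already on the board.
queens_fast(N, Solution) :-
    place(N, N, [], Solution).

place(0, _, Placed, Placed).
place(Row, N, Placed, Solution) :-
    Row > 0,
    between(1, N, Col),
    no_attacks((Row, Col), Placed),
    Row1 is Row - 1,
    place(Row1, N, [(Row, Col)|Placed], Solution).
```

With this shape, a bad column is rejected before any deeper queens are generated, which prunes most of the search that the generate-and-test version explores. (In XSB, between/3 and member/2 come from the basics library rather than being built in.)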

## Part B

### Reach

Generate the input set, then run:

```
python randomgen.py [number of nodes]
clingo --models 0 B/reach.lp
```

| Input size | Way 1 | Way 2 | Way 3 | Way 4 |
|---|---|---|---|---|
| 10000 | 0.655s | 0.703s | 0.664s | 0.665s |
| 20000 | 1.376s | 1.446s | 1.290s | 1.292s |
| 50000 | 3.355s | 3.501s | 2.750s | 1.645s |
| 100000 | 6.778s | 5.002s | 3.375s | 3.438s |
| 200000 | 12.119s | 7.514s | 7.267s | 7.021s |

Though it looks like Way 4 is much better than Way 1 (almost twice as fast), if we let the computer rest for a while and rerun them in the opposite order (starting from Way 4, then 3, followed by 2 and 1), we get:

| Input size | Way 1 | Way 2 | Way 3 | Way 4 |
|---|---|---|---|---|
| 200000 | 7.146s | 7.610s | 7.120s | 12.103s |

It is exactly the other way around! I believe this is because the input file has become very large (42 MB), so whichever run goes first wastes a lot of time loading it into memory and then into the cache, while the following runs get a much higher cache hit rate; that is why the first run takes much longer than the others.
The runtime grows linearly with the input size, and there is no significant difference between the different implementations.

### N-queens

```
clingo --models 0 B/nqueens.lp
clingo --models 1 B/nqueens.lp
```

| Number of queens | Time for 1 queen | Time for all queens |
|---|---|---|
| 8 | 0.004s | 0.006s |
| 10 | 0.003s | 0.086s |
| 12 | 0.004s | 4.909s |
| 14 | 0.006s | Too long to run |
| 20 | 0.011s | Too long to run |
| 50 | 0.073s | Too long to run |
| 100 | 0.463s | Too long to run |
| 200 | 3.233s | Too long to run |
| 500 | 35.073s | Too long to run |

## Part C

### N-queens

| Number of queens | Max k |
|---|---|
| 4 | 3 |
| 8 | 3 |
| 10 | 4 |
| 12 | 5 |
| 14 | 5 |
| 16 | 5 |
| 18 | 5 |
| 20 | 5 |

## Extra Credit

### I

```
python .\randompath.py 5000
xsb
['A/cycle.P'].
timePath.
```

| Number of nodes | Way 1 (s) | Way 2 (s) | Way 3 (s) |
|---|---|---|---|
| 200 | 0.0 | 0.0 | 0.407 |
| 500 | 0.031 | 0.031 | 6.125 |
| 1000 | 0.109 | 0.093 | 50.75 |
| 2000 | 0.391 | 0.391 | Too long to run |
| 5000 | 2.484 | 3.093 | Too long to run |
| 10000 | 10.344 | 13.125 | Too long to run |

```
clingo Extra/cycle.lp | grep "^Time"
```

| Number of nodes | Way 1 (s) | Way 2 (s) | Way 3 (s) |
|---|---|---|---|
| 200 | 0.041 | 0.041 | 1.284 |
| 500 | 0.271 | 0.264 | 20.948 |
| 1000 | 1.199 | 1.204 | Too long to run |
| 2000 | 5.683 | 5.364 | Too long to run |
| 5000 | 65.162 | 66.672 | Too long to run |

The runtime of the third way is much higher than that of the first two ways.
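
One formulation that typically blows up like this is the doubly recursive version of path, where both body goals are recursive calls. Purely as an illustration (this is a guess on my part and may not be what the third way in A/cycle.P actually is):

```prolog
:- table path3/2.

% Doubly recursive transitive closure: both body goals are tabled calls,
% so far more intermediate answers get combined than in the linear variants.
path3(X, Y) :- edge(X, Y).
path3(X, Y) :- path3(X, Z), path3(Z, Y).
```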

### II

See `Extra/2.pl`.

| Number of queens | Max k |
|---|---|
| 4 | 3 |
| 8 | 3 |
| 10 | 4 |
| 12 | 5 |
| 14 | 5 |
| 16 | 5 |
| 18 | 5 |
| 20 | 5 |