Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 4 new columns ({'total_lines', 'first_object_keys_sorted', 'source_url', 'sha256_first_100_lines_bytes'}) and 7 missing columns ({'blob_id', 'score', 'path', 'length_bytes', 'repo_name', 'content', 'int_score'}).
This happened while the json dataset builder was generating data using
hf://datasets/dongbobo/annoy-datasync-rawcode-1k-summary/summary.json (at revision 58d1cb1a953ba8810ab464a9b2ef316566ac4aa2), [/tmp/hf-datasets-cache/medium/datasets/73335701257376-config-parquet-and-info-dongbobo-annoy-datasync-r-26575685/hub/datasets--dongbobo--annoy-datasync-rawcode-1k-summary/snapshots/58d1cb1a953ba8810ab464a9b2ef316566ac4aa2/sample_50.jsonl (origin=hf://datasets/dongbobo/annoy-datasync-rawcode-1k-summary@58d1cb1a953ba8810ab464a9b2ef316566ac4aa2/sample_50.jsonl), /tmp/hf-datasets-cache/medium/datasets/73335701257376-config-parquet-and-info-dongbobo-annoy-datasync-r-26575685/hub/datasets--dongbobo--annoy-datasync-rawcode-1k-summary/snapshots/58d1cb1a953ba8810ab464a9b2ef316566ac4aa2/summary.json (origin=hf://datasets/dongbobo/annoy-datasync-rawcode-1k-summary@58d1cb1a953ba8810ab464a9b2ef316566ac4aa2/summary.json)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback: Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 674, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
source_url: string
total_lines: int64
first_object_keys_sorted: list<item: string>
child 0, item: string
sha256_first_100_lines_bytes: string
to
{'blob_id': Value('string'), 'repo_name': Value('string'), 'path': Value('string'), 'length_bytes': Value('int64'), 'score': Value('float64'), 'int_score': Value('int64'), 'content': Value('string')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1333, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 966, in convert_to_parquet
builder.download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 4 new columns ({'total_lines', 'first_object_keys_sorted', 'source_url', 'sha256_first_100_lines_bytes'}) and 7 missing columns ({'blob_id', 'score', 'path', 'length_bytes', 'repo_name', 'content', 'int_score'}).
This happened while the json dataset builder was generating data using
hf://datasets/dongbobo/annoy-datasync-rawcode-1k-summary/summary.json (at revision 58d1cb1a953ba8810ab464a9b2ef316566ac4aa2), [/tmp/hf-datasets-cache/medium/datasets/73335701257376-config-parquet-and-info-dongbobo-annoy-datasync-r-26575685/hub/datasets--dongbobo--annoy-datasync-rawcode-1k-summary/snapshots/58d1cb1a953ba8810ab464a9b2ef316566ac4aa2/sample_50.jsonl (origin=hf://datasets/dongbobo/annoy-datasync-rawcode-1k-summary@58d1cb1a953ba8810ab464a9b2ef316566ac4aa2/sample_50.jsonl), /tmp/hf-datasets-cache/medium/datasets/73335701257376-config-parquet-and-info-dongbobo-annoy-datasync-r-26575685/hub/datasets--dongbobo--annoy-datasync-rawcode-1k-summary/snapshots/58d1cb1a953ba8810ab464a9b2ef316566ac4aa2/summary.json (origin=hf://datasets/dongbobo/annoy-datasync-rawcode-1k-summary@58d1cb1a953ba8810ab464a9b2ef316566ac4aa2/summary.json)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
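For reference, a minimal load-time workaround for the column mismatch (as opposed to the README YAML configuration the linked docs describe) might look like the sketch below. The repo id and file names come from the error above; treating each file as its own `data_files` selection is an assumption about how the repo is meant to be split.

```python
from datasets import load_dataset

# Hedged sketch: load each data file on its own so the two different schemas
# ({'blob_id', 'content', ...} vs {'source_url', 'total_lines', ...}) are never
# cast against each other.
rawcode = load_dataset(
    "dongbobo/annoy-datasync-rawcode-1k-summary",
    data_files={"train": "sample_50.jsonl"},
)
summary = load_dataset(
    "dongbobo/annoy-datasync-rawcode-1k-summary",
    data_files={"train": "summary.json"},
)
print(rawcode["train"].column_names)   # blob_id, repo_name, path, length_bytes, ...
print(summary["train"].column_names)   # source_url, total_lines, ...
```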
| blob_id (string) | repo_name (string) | path (string) | length_bytes (int64) | score (float64) | int_score (int64) | content (string) |
|---|---|---|---|---|---|---|
| b7b944e5d5dd1cd1b41952c55bc13c348f905024 | Oyekunle-Mark/Graphs | /projects/ancestor/ancestor.py | 1,360 | 3.671875 | 4 |
from graph import Graph
from util import Stack
def earliest_ancestor(ancestors, starting_node):
# FIRST REPRESENT THE INPUT ANCESTORS AS A GRAPH
# create a graph instance
graph = Graph()
# loop through ancestors and add every number as a vertex
for parent, child in ancestors:
# add the parent as a vertex
graph.add_vertex(parent)
# add the child as a vertex as well
graph.add_vertex(child)
# # loop through ancestors and build the connections
for parent, child in ancestors:
# connect the parent to the child
# the connection is reversed because dft transverses downward
graph.add_edge(child, parent)
# if starting node has no child
if not graph.vertices[starting_node]:
# return -1
return -1
# create a stack to hold the vertices
s = Stack()
# add the starting_node to the stack
s.push(starting_node)
# set earliest_anc to -1
earliest_anc = -1
# loop while stack is not empty
while s.size() > 0:
# pop the stack
vertex = s.pop()
# set the earliest_anc to vertex
earliest_anc = vertex
# add all its connected vertices to the queue
# sort the vertices maintain order
for v in sorted(graph.vertices[vertex]):
s.push(v)
return earliest_anc
|
| 150d0efefb3c712edc14a5ff039ef2082c43152b | syurskyi/Python_Topics | /125_algorithms/_exercises/templates/_algorithms_challenges/leetcode/LeetCode_with_solution/111_Minimum_Depth_of_Binary_Tree.py | 1,281 | 4.03125 | 4 |
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution(object):
# def minDepth(self, root):
# """
# :type root: TreeNode
# :rtype: int
# """
# # Recursion
# if root is None:
# return 0
# ld = self.minDepth(root.left)
# rd = self.minDepth(root.right)
# if ld != 0 and rd != 0:
# # handle 0 case!
# return 1 + min(ld, rd)
# return 1 + ld +rd
    def minDepth(self, root):
        # BFS
        if root is None:
            return 0
        queue = [root]
        depth, rightMost = 1, root
        while len(queue) > 0:
            node = queue.pop(0)
            if node.left is None and node.right is None:
                break
            if node.left is not None:
                queue.append(node.left)
            if node.right is not None:
                queue.append(node.right)
            if node == rightMost:
                # reached the current level end
                depth += 1
                if node.right is not None:
                    rightMost = node.right
                else:
                    rightMost = node.left
        return depth
|
| 039dd3727cdd229548d94108ee220efa4a5b4838 | mertdemirlicakmak/project_euler | /problem_5.py | 1,144 | 4.0625 | 4 |
""""2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is evenly
divisible by all of the numbers from 1 to 20?"""
# This function returns the smallest number that is divisible to
# 1 to num
def find_smallest_divisible(num):
multi_list = []
result_dict = {}
result = 1
multi_list.extend(range(2, num + 1))
for num in multi_list:
prime_list = find_prime_factors(num)
for prime in prime_list:
if not (prime in result_dict):
result_dict[prime] = 0
if result_dict[prime] < prime_list.count(prime):
result_dict[prime] += 1
result *= prime
return result
# This functions returns the prime factors of num
def find_prime_factors(num):
temp = num
result = []
while temp > 1:
for number in range(2, temp + 1):
if temp % number == 0:
result.append(number)
temp //= number
break
return result
if __name__ == '__main__':
print(find_smallest_divisible(20))
|
| 0437daed01bce0f5a3046616917a2c29a1ed15d0 | zhouyuhangnju/freshLeetcode | /Combination Sum.py | 1,252 | 3.71875 | 4 |
def combinationSum(candidates, target):
"""
:type candidates: List[int]
:type target: int
:rtype: List[List[int]]
"""
res = []
candidates = sorted(candidates)
def combinationRemain(remain, curr_res):
if remain == 0:
res.append(curr_res)
return
for c in candidates:
if c > remain:
break
if curr_res and c < curr_res[-1]:
continue
combinationRemain(remain - c, curr_res + [c])
combinationRemain(target, [])
return res
def combinationSum2(candidates, target):
"""
:type candidates: List[int]
:type target: int
:rtype: List[List[int]]
"""
res = []
candidates = sorted(candidates)
def combinationRemain(remain, curr_res, curr_idx):
if remain == 0:
res.append(curr_res)
return
if remain < 0 or curr_idx >= len(candidates):
return
combinationRemain(remain-candidates[curr_idx], curr_res+[candidates[curr_idx]], curr_idx)
combinationRemain(remain, curr_res, curr_idx + 1)
combinationRemain(target, [], 0)
return res
if __name__ == '__main__':
print combinationSum2([2, 3, 6, 7], 7)
|
| 246130cc6d8d3b7c3fee372774aa2f93018f4e35 | alexleversen/AdventOfCode | /2020/08-2/main.py | 1,119 | 3.625 | 4 |
import re
file = open('input.txt', 'r')
lines = list(file.readlines())
def testTermination(swapLine):
if(lines[swapLine][0:3] == 'acc'):
return False
instructionIndex = 0
instructionsVisited = []
acc = 0
while(True):
if(instructionIndex == len(lines)):
return acc
if(instructionIndex in instructionsVisited):
return False
instructionsVisited.append(instructionIndex)
match = re.match('(\w{3}) ([+-]\d+)', lines[instructionIndex])
instruction, value = match.group(1, 2)
if(instructionIndex == swapLine):
if(instruction == 'jmp'):
instruction = 'nop'
elif(instruction == 'nop'):
instruction = 'jmp'
if(instruction == 'acc'):
acc += int(value)
instructionIndex += 1
elif(instruction == 'jmp'):
instructionIndex += int(value)
else:
instructionIndex += 1
for i in range(len(lines)):
terminationValue = testTermination(i)
if(terminationValue != False):
print(terminationValue)
|
| f9af5624fe12b3c6bd0c359e5162b7f9f48234e7 | Yarin78/yal | /python/yal/fenwick.py | 1,025 | 3.765625 | 4 |
# Datastructure for storing and updating integer values in an array in log(n) time
# and answering queries "what is the sum of all value in the array between 0 and x?" in log(n) time
#
# Also called Binary Indexed Tree (BIT). See http://codeforces.com/blog/entry/619
class FenwickTree:
def __init__(self, exp):
'''Creates a FenwickTree with range 0..(2^exp)-1'''
self.exp = exp
self.t = [0] * 2 ** (exp+1)
def query_range(self, x, y):
'''Gets the sum of the values in the range [x, y)'''
return self.query(y) - self.query(x)
def query(self, x, i=-1):
'''Gets the sum of the values in the range [0, x).'''
if i < 0:
i = self.exp
return (x&1) * self.t[(1<<i)+x-1] + self.query(x//2, i-1) if x else 0
def insert(self, x, v, i=-1):
'''Adds the value v to the position x'''
if i < 0:
i = self.exp
self.t[(1<<i)+x] += v
return self.t[(1<<i)+x] + (self.insert(x//2, v, i-1) if i > 0 else 0)
|
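The comment block in the row above describes the Fenwick/BIT interface (log-time point updates and prefix-sum queries); a small usage sketch is shown below. The import path is a guess based on the file location (/python/yal/fenwick.py), and the expected values were worked out by hand.

```python
from yal.fenwick import FenwickTree  # assumed import path for /python/yal/fenwick.py

ft = FenwickTree(3)           # supports positions 0..7 (exp = 3)
ft.insert(2, 5)               # add 5 at position 2
ft.insert(5, 1)               # add 1 at position 5
print(ft.query(6))            # sum over [0, 6)  -> 6
print(ft.query_range(3, 8))   # sum over [3, 8)  -> 1
```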
| 25ce186e86fc56201f52b12615caa13f98044d99 | travisoneill/project-euler | /python/007.py | 327 | 3.953125 | 4 |
from math import sqrt
def is_prime(n):
if n < 2: return False
for i in range(2, int(sqrt(n)) + 1):
if n % i == 0:
return False
return True
def nth_prime(n):
count = 2
i = 3
while count < n:
i+=2
if is_prime(i):
count += 1
print(i)
nth_prime(10001)
|
| 5bb4299f898d7a3957d4a0fd1ed4eb151ab44b47 | efeacer/EPFL_ML_Labs | /Lab04/template/least_squares.py | 482 | 3.5625 | 4 |
# -*- coding: utf-8 -*-
"""Exercise 3.
Least Square
"""
import numpy as np
def compute_error_vector(y, tx, w):
return y - tx.dot(w)
def compute_mse(error_vector):
return np.mean(error_vector ** 2) / 2
def least_squares(y, tx):
coefficient_matrix = tx.T.dot(tx)
constant_vector = tx.T.dot(y)
w = np.linalg.solve(coefficient_matrix, constant_vector)
error_vector = compute_error_vector(y, tx, w)
loss = compute_mse(error_vector)
return w, loss
|
| 0a186e9f92527a8509f7082f2f4065d3b366e957 | vinnyatanasov/naive-bayes-classifier | /nb.py | 3,403 | 3.6875 | 4 |
"""
Naive Bayes classifier
- gets reviews as input
- counts how many times words appear in pos/neg
- adds one to each (to not have 0 probabilities)
- computes likelihood and multiply by prior (of review being pos/neg) to get the posterior probability
- in a balanced dataset, prior is the same for both, so we ignore it here
- chooses highest probability to be prediction
"""
import math
def test(words, probs, priors, file_name):
label = 1 if "pos" in file_name else -1
count = 0
correct = 0
with open(file_name) as file:
for line in file:
# begin with prior (simply how likely it is to be pos/neg before evidence)
pos = priors[0]
neg = priors[1]
# compute likelihood
# sum logs, better than multiplying very small numbers
for w in line.strip().split():
# if word wasn't in train data, then we have to ignore it
# same effect if we add test words into corpus and gave small probability
if w in words:
pos += math.log(probs[w][0])
neg += math.log(probs[w][1])
# say it's positive if pos >= neg
pred = 1 if pos >= neg else -1
# increment counters
count += 1
if pred == label:
correct += 1
# return results
return 100*(correct/float(count))
def main():
# count number of occurances of each word in pos/neg reviews
# we'll use a dict containing a two item list [pos count, neg count]
words = {}
w_count = 0 # words
p_count = 0 # positive instances
n_count = 0 # negative instances
# count positive occurrences
with open("data/train.positive") as file:
for line in file:
for word in line.strip().split():
try:
words[word][0] += 1
except:
words[word] = [1, 0]
w_count += 1
p_count += 1
# count negative occurrences
with open("data/train.negative") as file:
for line in file:
for word in line.strip().split():
try:
words[word][1] += 1
except:
words[word] = [0, 1]
w_count += 1
n_count += 1
# calculate probabilities of each word
corpus = len(words)
probs = {}
for key, value in words.iteritems():
# smooth values (add one to each)
value[0]+=1
value[1]+=1
# prob = count / total count + number of words (for smoothing)
p_pos = value[0] / float(w_count + corpus)
p_neg = value[1] / float(w_count + corpus)
probs[key] = [p_pos, p_neg]
# compute priors based on frequency of reviews
priors = []
priors.append(math.log(p_count / float(p_count + n_count)))
priors.append(math.log(n_count / float(p_count + n_count)))
# test naive bayes
pos_result = test(words, probs, priors, "data/test.positive")
neg_result = test(words, probs, priors, "data/test.negative")
print "Accuracy(%)"
print "Positive:", pos_result
print "Negative:", neg_result
print "Combined:", (pos_result+neg_result)/float(2)
if __name__ == "__main__":
print "-- Naive Bayes classifier --\n"
main()
|
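The docstring in the row above explains add-one smoothing and summing log probabilities instead of multiplying small numbers; a tiny self-contained illustration of just that step (toy counts invented for this sketch) is:

```python
import math

# toy counts: word -> [occurrences in positive reviews, occurrences in negative reviews]
words = {"great": [8, 1], "boring": [0, 6], "plot": [3, 3]}
w_count = sum(p + n for p, n in words.values())  # total word occurrences
vocab = len(words)

def smoothed(count):
    # add one so an unseen (word, class) pairing never gets probability 0
    return (count + 1) / float(w_count + vocab)

review = ["great", "plot"]
log_pos = sum(math.log(smoothed(words[w][0])) for w in review if w in words)
log_neg = sum(math.log(smoothed(words[w][1])) for w in review if w in words)
print("positive" if log_pos >= log_neg else "negative")  # positive
```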
| ceca83f8d1a6d0dbc027ad04a7632bb8853bc17f | harshablast/numpy_NN | /nn.py | 2,183 | 3.734375 | 4 |
import numpy as np
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def d_sigmoid(x):
return x * (1 - x)
def relu(x):
return x * (x > 0)
def d_relu(x):
return 1 * (x > 0)
class neural_network:
def __init__(self, nodes):
self.input_dim = nodes[0]
self.HL01_dim = nodes[1]
self.HL02_dim = nodes[2]
self.output_dim = nodes[3]
self.W1 = 2 * (np.random.rand(self.input_dim, self.HL01_dim) -1)
self.W2 = 2 * (np.random.rand(self.HL01_dim, self.HL02_dim) -1)
self.W3 = 2 * (np.random.rand(self.HL02_dim, self.output_dim) -1)
self.B1 = 2 * (np.random.rand(1, self.HL01_dim))
self.B2 = 2 * (np.random.rand(1, self.HL02_dim))
self.B3 = 2 * (np.random.rand(1, self.output_dim))
def forward_pass(self, input):
self.HL01_out = sigmoid(np.add(np.matmul(input, self.W1), self.B1))
self.HL02_out = sigmoid(np.add(np.matmul(self.HL01_out, self.W2), self.B2))
self.output = sigmoid(np.add(np.matmul(self.HL02_out, self.W3), self.B3))
return self.output
def backward_pass(self, train_data_X, train_data_Y, iterations, learning_rate):
for j in range(iterations):
self.forward_pass(train_data_X)
error = np.sum(np.square(self.output - train_data_Y))
print(error)
output_error = self.output - train_data_Y
output_deltas = output_error * d_sigmoid(self.output)
self.W3 -= np.dot(self.HL02_out.T, output_deltas) * learning_rate
self.B3 -= np.sum(output_deltas, axis=0, keepdims=True) * learning_rate
HL02_error = np.dot(output_deltas, self.W3.T)
HL02_deltas = HL02_error * d_sigmoid(self.HL02_out)
self.W2 -= np.dot(self.HL01_out.T, HL02_deltas) * learning_rate
self.B2 -= np.sum(HL02_deltas, axis=0, keepdims=True) * learning_rate
HL01_error = np.dot(HL02_deltas, self.W2.T)
HL01_deltas = HL01_error * d_sigmoid(self.HL01_out)
self.W1 -= np.dot(train_data_X.T, HL01_deltas) * learning_rate
self.B1 -= np.sum(HL01_deltas, axis=0, keepdims=True) * learning_rate
|
| 235fee728f7853aa65b05a101deeb1dfeb5ebf8f | syurskyi/Python_Topics | /125_algorithms/_examples/_algorithms_challenges/leetcode/leetCode/DynamicProgramming/198_HouseRobber.py | 469 | 3.609375 | 4 |
#! /usr/bin/env python
# -*- coding: utf-8 -*-
class Solution(object):
# Dynamic Programming
def rob(self, nums):
if not nums:
return 0
pre_rob = 0
pre_not_rob = 0
for num in nums:
cur_rob = pre_not_rob + num
cur_not_rob = max(pre_rob, pre_not_rob)
pre_rob = cur_rob
pre_not_rob = cur_not_rob
return max(pre_rob, pre_not_rob)
"""
[]
[1,2]
[12, 1,1,12,1]
"""
|
| 0d0892bf443e39c3c5ef078f2cb846370b7852e9 | JakobLybarger/Graph-Pathfinding-Algorithms | /dijkstras.py | 1,297 | 4.15625 | 4 |
import math
import heapq
def dijkstras(graph, start):
distances = {} # Dictionary to keep track of the shortest distance to each vertex in the graph
# The distance to each vertex is not known so we will just assume each vertex is infinitely far away
for vertex in graph:
distances[vertex] = math.inf
distances[start] = 0 # Distance from the first point to the first point is 0
vertices_to_explore = [(0, start)]
# Continue while heap is not empty
while vertices_to_explore:
distance, vertex = heapq.heappop(vertices_to_explore) # Pop the minimum distance vertex off of the heap
for neighbor, e_weight in graph[vertex]:
new_distance = distance + e_weight
# If the new distance is less than the current distance set the current distance as new distance
if new_distance < distances[neighbor]:
distances[neighbor] = new_distance
heapq.heappush(vertices_to_explore, (new_distance, neighbor))
return distances # The dictionary of minimum distances from start to each vertex
graph = {
'A': [('B', 10), ('C', 3)],
'C': [('D', 2)],
'D': [('E', 10)],
'E': [('A', 7)],
'B': [('C', 3), ('D', 2)]
}
print(dijkstras(graph, "A"))
|
| c3626ea1efb1c930337e261be165d048d842d15a | Razorro/Leetcode | /72. Edit Distance.py | 3,999 | 3.515625 | 4 |
"""
Given two words word1 and word2, find the minimum number of operations required to convert word1 to word2.
You have the following 3 operations permitted on a word:
Insert a character
Delete a character
Replace a character
Example 1:
Input: word1 = "horse", word2 = "ros"
Output: 3
Explanation:
horse -> rorse (replace 'h' with 'r')
rorse -> rose (remove 'r')
rose -> ros (remove 'e')
Example 2:
Input: word1 = "intention", word2 = "execution"
Output: 5
Explanation:
intention -> inention (remove 't')
inention -> enention (replace 'i' with 'e')
enention -> exention (replace 'n' with 'x')
exention -> exection (replace 'n' with 'c')
exection -> execution (insert 'u')
A good question! I thought it was similar to the dynamic programming examples in CLRS;
it turns out the skeleton has a little similarity, but the core idea cannot be carried
over directly to this question.
It is called the Levenshtein distance.
Mathematically, the Levenshtein distance between two strings a and b (of length |a| and |b|
respectively) is given by lev_{a,b}(|a|, |b|), where

    lev_{a,b}(i, j) = max(i, j)                                   if min(i, j) = 0
    lev_{a,b}(i, j) = min( lev_{a,b}(i-1, j) + 1,
                           lev_{a,b}(i, j-1) + 1,
                           lev_{a,b}(i-1, j-1) + [a_i != b_j] )   otherwise
Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances
between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix in
a dynamic programming fashion, and thus find the distance between the two full strings as the last value computed.
This algorithm, an example of bottom-up dynamic programming, is discussed, with variants, in the 1974 article The
String-to-string correction problem by Robert A. Wagner and Michael J. Fischer.[4]
"""
class Solution:
def minDistance(self, word1: 'str', word2: 'str') -> 'int':
points = self.findBiggestCommon(word1, word2)
def findBiggestCommon(self, source, target):
path = [0] * len(source)
directions = []
for i in range(len(target)):
current = [0] * len(source)
d = []
for j in range(len(source)):
if target[i] == source[j]:
current[j] = path[j-1] + 1 if j-1 >= 0 else 1
d.append('=')
else:
left = current[j-1] if j-1 >= 0 else 0
if left > path[j]:
d.append('l')
else:
d.append('u')
current[j] = max(left, path[j])
path = current
directions.append(d)
x_y = []
row, col = len(target)-1, len(source)-1
while row >= 0 and col >=0:
if directions[row][col] == '=':
x_y.append((row, col))
row -= 1
col -= 1
elif directions[row][col] == 'u':
row -= 1
else:
col -= 1
return x_y
def standardAnswer(self, word1, word2):
m = len(word1) + 1
n = len(word2) + 1
det = [[0 for _ in range(n)] for _ in range(m)]
for i in range(m):
det[i][0] = i
for i in range(n):
det[0][i] = i
for i in range(1, m):
for j in range(1, n):
                det[i][j] = min(det[i][j - 1] + 1, det[i - 1][j] + 1,
                                det[i - 1][j - 1] + (0 if word1[i - 1] == word2[j - 1] else 1))
return det[m - 1][n - 1]
if __name__ == '__main__':
s = Solution()
distance = s.findBiggestCommon('horse', 'ros')
distance = sorted(distance, key=lambda e: e[1])
c = 0
trans = 0
for left, right in distance:
trans += abs(right - left) + left-c
c = left + 1
print(s.findBiggestCommon('horse', 'ros'))
|
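Independently of the Solution class in the row above, a self-contained bottom-up sketch of the recurrence quoted in its docstring (function name chosen for illustration) is:

```python
def edit_distance(word1, word2):
    # bottom-up DP over the Levenshtein recurrence quoted above
    m, n = len(word1) + 1, len(word2) + 1
    dp = [[0] * n for _ in range(m)]
    for i in range(m):
        dp[i][0] = i                               # delete everything in word1[:i]
    for j in range(n):
        dp[0][j] = j                               # insert everything in word2[:j]
    for i in range(1, m):
        for j in range(1, n):
            cost = 0 if word1[i - 1] == word2[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # delete
                           dp[i][j - 1] + 1,       # insert
                           dp[i - 1][j - 1] + cost)  # replace or match
    return dp[m - 1][n - 1]

print(edit_distance("horse", "ros"))            # 3
print(edit_distance("intention", "execution"))  # 5
```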
| b3a6e632568dd13f128eda2cba96293e2bd0d3cd | sniperswang/dev | /leetcode/L265/test.py | 1,452 | 3.640625 | 4 |
"""
There are a row of n houses, each house can be painted with one of the k colors.
The cost of painting each house with a certain color is different. You have to paint all the houses such that no two adjacent houses have the same color.
The cost of painting each house with a certain color is represented by a n x k cost matrix.
For example, costs[0][0] is the cost of painting house 0 with color 0; costs[1][2] is the cost of painting house 1 with color 2, and so on... Find the minimum cost to paint all houses.
Note:
All costs are positive integers.
Follow up:
Could you solve it in O(nk) runtime?
"""
class Solution(object):
def minCostII(self, costs):
"""
:type costs: List[List[int]]
:rtype: int
"""
if len(costs) == 0:
return 0
m = len(costs)
n = len(costs[0])
for i in range (1, m):
preMin = {}
preMin[0] = min(costs[i-1][1:])
costs[i][0] = costs[i][0] + preMin[0]
if ( n > 1):
preMin[n-1] = min(costs[i-1][:n-1])
costs[i][n-1] = costs[i][n-1] + preMin[n-1]
for j in range (1, n-1):
preMin[j] = min( min(costs[i-1][:j]), min(costs[i-1][j+1:]) )
costs[i][j] = costs[i][j] + preMin[j]
return min(costs[len(costs)-1])
costa = [1,2,4]
costb = [3,1,0]
costc = [1,2,1]
costs = []
costs.append(costa)
costs.append(costb)
costs.append(costc)
s = Solution()
print s.minCostII(costs)
|
| a1b41adcda2d3b3522744e954cf8ae2f901c6b01 | drunkwater/leetcode | /medium/python3/c0099_209_minimum-size-subarray-sum/00_leetcode_0099.py | 798 | 3.546875 | 4 |
# DRUNKWATER TEMPLATE(add description and prototypes)
# Question Title and Description on leetcode.com
# Function Declaration and Function Prototypes on leetcode.com
#209. Minimum Size Subarray Sum
#Given an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray of which the sum ≥ s. If there isn't one, return 0 instead.
#For example, given the array [2,3,1,2,4,3] and s = 7,
#the subarray [4,3] has the minimal length under the problem constraint.
#click to show more practice.
#Credits:
#Special thanks to @Freezen for adding this problem and creating all test cases.
#class Solution:
# def minSubArrayLen(self, s, nums):
# """
# :type s: int
# :type nums: List[int]
# :rtype: int
# """
# Time Is Money
|
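The row above is only a template (the solution body is commented out); a minimal sliding-window sketch for the stated problem, written independently of that template, could look like:

```python
def min_sub_array_len(s, nums):
    # grow the window on the right, shrink from the left while the sum stays >= s
    left = 0
    window_sum = 0
    best = float("inf")
    for right, value in enumerate(nums):
        window_sum += value
        while window_sum >= s:
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return 0 if best == float("inf") else best

print(min_sub_array_len(7, [2, 3, 1, 2, 4, 3]))  # 2, the subarray [4, 3]
```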
| 40a908c9b3cf99674e66b75f56809d485f0a81f9 | Gborgman05/algs | /py/populate_right_pointers.py | 1,019 | 3.859375 | 4 |
"""
# Definition for a Node.
class Node:
def __init__(self, val: int = 0, left: 'Node' = None, right: 'Node' = None, next: 'Node' = None):
self.val = val
self.left = left
self.right = right
self.next = next
"""
class Solution:
def connect(self, root: 'Optional[Node]') -> 'Optional[Node]':
saved = root
levels = []
l = [root]
n = []
while l:
n = []
for node in l:
if node:
if node.left:
n.append(node.left)
if node.right:
n.append(node.right)
levels.append(l)
l = n
for level in levels:
for i in range(len(level)):
if level[i] == None:
continue
if i < len(level) - 1:
level[i].next = level[i+1]
else:
level[i].next = None
return root
|
| c2f622f51bbddc54b0199c4e0e2982bc2ebfa030 | qdm12/courses | /Fundamental Algorithms/Lesson-03/algorithms.py | 2,479 | 3.875 | 4 |
from operator import itemgetter
from math import floor
def radix_sort_alpha(words):
l = len(words[0])
for w in words:
if len(w) != l:
raise Exception("All words should be of same length")
for i in range(l, 0, -1):
words = sorted(words, key=itemgetter(i - 1))
words_str = str([''.join(w) for w in words])
print "PASS "+str(l - i + 1)+": "+words_str
return words_str
def bucket_sort(A):
print "Initial input array A: "+str(A)
n = len(A)
for i in range(n):
assert(A[i] >= 0 and A[i] < 1)
B = [[] for _ in range(n)]
print "Initial output buckets array B: "+str(B)
for i in range(n):
place = int(floor(A[i] * n))
B[place].append(A[i])
print "Output buckets array B with elements in buckets: "+str(B)
for j in range(n):
B[j].sort()
print "Output buckets array B with elements sorted in buckets: "+str(B)
B_final = []
for bucket in B:
B_final += bucket
print "Final output array B: "+str(B_final)
return B_final
class MergeSort(object):
def merge(self, A, l, q, r):
n1 = q - l + 1
n2 = r - q
L = [A[l + i] for i in range(n1)]
R = [A[q + 1 + i] for i in range(n2)]
i = j = 0 # Initial index of first and second subarrays
k = l # Initial index of merged subarray
while i < n1 and j < n2:
if L[i] <= R[j]:
A[k] = L[i]
i += 1
else:
A[k] = R[j]
j += 1
k += 1
# Copy the remaining elements of L[], if there are any
while i < n1:
A[k] = L[i]
i += 1
k += 1
# Copy the remaining elements of R[], if there are any
while j < n2:
A[k] = R[j]
j += 1
k += 1
def mergeSort(self, A, l, r):
if l < r:
q = int(floor((l+r)/2))
self.mergeSort(A, l, q)
self.mergeSort(A, q+1, r)
self.merge(A, l, q, r)
def run(self):
A = [54,26,93,17,77,31,44,55,20]
self.mergeSort(A, 0, len(A) - 1)
print A
if __name__ == "__main__":
radix_sort_alpha(["COW", "DOG", "SEA", "RUG", "ROW", "MOB", "BOX", "TAB", "BAR", "EAR", "TAR", "DIG", "BIG", "TEA", "NOW", "FOX"])
bucket_sort([.79,.13,.16,.64,.39,.20,.89,.53,.71,.43])
m = MergeSort()
m.run()
|
| baa6b0b8905dfc9e832125196f3503f271557273 | syurskyi/Python_Topics | /125_algorithms/_exercises/templates/_algorithms_challenges/leetcode/leetCode/Array/SlidingWindowMaximum.py | 2,656 | 4.03125 | 4 |
"""
Given an array nums, there is a sliding window of size k which is moving from the very left of the array to the very right. You can only see the k numbers in the window. Each time the sliding window moves right by one position. Return the max sliding window.
Example:
Input: nums = [1,3,-1,-3,5,3,6,7], and k = 3
Output: [3,3,5,5,6,7]
Explanation:
Window position Max
--------------- -----
[1 3 -1] -3 5 3 6 7 3
1 [3 -1 -3] 5 3 6 7 3
1 3 [-1 -3 5] 3 6 7 5
1 3 -1 [-3 5 3] 6 7 5
1 3 -1 -3 [5 3 6] 7 6
1 3 -1 -3 5 [3 6 7] 7
Note:
You may assume k is always valid, 1 ≤ k ≤ input array's size for non-empty array.
Follow up:
Could you solve it in linear time?
My approach here:
1. Take the first k numbers and keep two copies: one sorted, one unsorted.
2. The sorted copy is used for lookups; the unsorted copy is used for additions and removals.
3. A heap could be used here instead of sorting; a red-black tree would be best of all.
4. The sorted list is kept so binary search can be used, giving O(log n) lookups for later insertions.
   But even if the position to insert or delete is found in O(log n), inserting into or deleting from a list is still O(n)...
   So a heap or a red-black tree is best, where both insertion and deletion are O(log n).
5. The sorted list is mainly used to fetch the maximum and to drop stale entries; binary search is used for the removal.
6. The unsorted list is used to know which element to remove and which to add.
Beats 31%, 176 ms.
Test link:
https://leetcode.com/problems/sliding-window-maximum/description/
"""
from collections import deque
import bisect
class Solution(object):
    def find_bi(self, nums, target):
        lo = 0
        hi = len(nums)
        while lo < hi:
            mid = (lo + hi) // 2
            if nums[mid] == target:
                return mid
            if nums[mid] < target:
                lo = mid + 1
            else:
                hi = mid

    def maxSlidingWindow(self, nums, k):
        """
        :type nums: List[int]
        :type k: int
        :rtype: List[int]
        """
        if not nums:
            return []
        x = nums[:k]
        y = sorted(x)
        x = deque(x)
        maxes = max(x)
        result = [maxes]
        for i in nums[k:]:
            pop = x.popleft()
            x.append(i)
            index = self.find_bi(y, pop)
            y.pop(index)
            bisect.insort_left(y, i)
            result.append(y[-1])
        return result
|
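The (translated) notes in the row above settle for a sorted-list approach, while the problem's follow-up asks for linear time. A monotonic-deque sketch of that linear-time idea, written independently of the template above, is:

```python
from collections import deque

def max_sliding_window(nums, k):
    # keep indices of a decreasing run of values; the front is always the window max
    dq = deque()
    result = []
    for i, value in enumerate(nums):
        while dq and nums[dq[-1]] <= value:  # smaller values can never be the max again
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:                   # front index fell out of the window
            dq.popleft()
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result

print(max_sliding_window([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]
```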
| 8496596aefa39873f8321a61d361bf209e54dcbd | syurskyi/Python_Topics | /125_algorithms/_exercises/templates/_algorithms_challenges/leetcode/LeetCode_with_solution/108_Convert_Sorted_Array_to_Binary_Search_Tree.py | 977 | 3.859375 | 4 |
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution(object):
# def sortedArrayToBST(self, nums):
# """
# :type nums: List[int]
# :rtype: TreeNode
# """
# # Recursion with slicing
# if not nums:
# return None
# mid = len(nums) / 2
# root = TreeNode(nums[mid])
# root.left = self.sortedArrayToBST(nums[:mid])
# root.right = self.sortedArrayToBST(nums[mid + 1:])
# return root
    def sortedArrayToBST(self, nums):
        # Recursion with index
        return self.getHelper(nums, 0, len(nums) - 1)

    def getHelper(self, nums, start, end):
        if start > end:
            return None
        mid = (start + end) / 2
        node = TreeNode(nums[mid])
        node.left = self.getHelper(nums, start, mid - 1)
        node.right = self.getHelper(nums, mid + 1, end)
        return node
|
| 0be537def5f8cc9ba9218267bf774b28ee44d4c7 | SoumyaMalgonde/AlgoBook | /python/graph_algorithms/Dijkstra's_Shortest_Path_Implementation_using_Adjacency_List.py | 2,933 | 3.921875 | 4 |
class Node_Distance :
def __init__(self, name, dist) :
self.name = name
self.dist = dist
class Graph :
def __init__(self, node_count) :
self.adjlist = {}
self.node_count = node_count
def Add_Into_Adjlist(self, src, node_dist) :
if src not in self.adjlist :
self.adjlist[src] = []
self.adjlist[src].append(node_dist)
def Dijkstras_Shortest_Path(self, source) :
# Initialize the distance of all the nodes from source to infinity
distance = [999999999999] * self.node_count
# Distance of source node to itself is 0
distance[source] = 0
# Create a dictionary of { node, distance_from_source }
dict_node_length = {source: 0}
while dict_node_length :
# Get the key for the smallest value in the dictionary
# i.e Get the node with the shortest distance from the source
source_node = min(dict_node_length, key = lambda k: dict_node_length[k])
del dict_node_length[source_node]
for node_dist in self.adjlist[source_node] :
adjnode = node_dist.name
length_to_adjnode = node_dist.dist
# Edge relaxation
if distance[adjnode] > distance[source_node] + length_to_adjnode :
distance[adjnode] = distance[source_node] + length_to_adjnode
dict_node_length[adjnode] = distance[adjnode]
for i in range(self.node_count) :
print("Source Node ("+str(source)+") -> Destination Node(" + str(i) + ") : " + str(distance[i]))
def main() :
g = Graph(6)
# Node 0: <1,5> <2,1> <3,4>
g.Add_Into_Adjlist(0, Node_Distance(1, 5))
g.Add_Into_Adjlist(0, Node_Distance(2, 1))
g.Add_Into_Adjlist(0, Node_Distance(3, 4))
# Node 1: <0,5> <2,3> <4,8>
g.Add_Into_Adjlist(1, Node_Distance(0, 5))
g.Add_Into_Adjlist(1, Node_Distance(2, 3))
g.Add_Into_Adjlist(1, Node_Distance(4, 8))
# Node 2: <0,1> <1,3> <3,2> <4,1>
g.Add_Into_Adjlist(2, Node_Distance(0, 1))
g.Add_Into_Adjlist(2, Node_Distance(1, 3))
g.Add_Into_Adjlist(2, Node_Distance(3, 2))
g.Add_Into_Adjlist(2, Node_Distance(4, 1))
# Node 3: <0,4> <2,2> <4,2> <5,1>
g.Add_Into_Adjlist(3, Node_Distance(0, 4))
g.Add_Into_Adjlist(3, Node_Distance(2, 2))
g.Add_Into_Adjlist(3, Node_Distance(4, 2))
g.Add_Into_Adjlist(3, Node_Distance(5, 1))
# Node 4: <1,8> <2,1> <3,2> <5,3>
g.Add_Into_Adjlist(4, Node_Distance(1, 8))
g.Add_Into_Adjlist(4, Node_Distance(2, 1))
g.Add_Into_Adjlist(4, Node_Distance(3, 2))
g.Add_Into_Adjlist(4, Node_Distance(5, 3))
# Node 5: <3,1> <4,3>
g.Add_Into_Adjlist(5, Node_Distance(3, 1))
g.Add_Into_Adjlist(5, Node_Distance(4, 3))
g.Dijkstras_Shortest_Path(0)
print("\n")
g.Dijkstras_Shortest_Path(5)
if __name__ == "__main__" :
main()
|
| 8130b1edf4df29a9ab76784289a22d5fb90863e7 | ridhishguhan/faceattractivenesslearner | /Classify.py | 1,158 | 3.6875 | 4 |
import numpy as np
import Utils
class Classifier:
training = None
train_arr = None
classes = None
def __init__(self, training, train_arr, CLASSES = 3):
self.training = training
self.train_arr = train_arr
self.classes = CLASSES
#KNN Classification method
def OneNNClassify(self, test_set, K):
# KNN Method
# for each test sample t
# for each training sample tr
# compute norm |t - tr|
# choose top norm
# class which it belongs to is classification
[tr,tc] = test_set.shape
[trr,trc] = self.train_arr.shape
result = np.array(np.zeros([tc]))
i = 0
#print "KNN : with K = ",K
while i < tc:
x = test_set[:,i]
xmat = np.tile(x,(1,trc))
xmat = xmat - self.train_arr
norms = Utils.ComputeNorm(xmat)
closest_train = np.argmin(norms)
which_train = self.training[closest_train]
attr = which_train.attractiveness
result[i] = attr
#print "Class : ",result[i]
i += 1
return result
|
| fa0a2e8e0ec8251c6d735b02dfa1d7a94e09c6b2 | paul0920/leetcode | /question_leetcode/1488_2.py | 1,538 | 3.984375 | 4 |
import collections
import heapq
rains = [1, 2, 0, 0, 2, 1]
# 0 1 2 3 4 5
rains = [10, 20, 20, 0, 20, 10]
# min heap to track the days when flooding would happen (if lake not dried)
nearest = []
# dict to store all rainy days
# use case: to push the subsequent rainy days into the heap for wet lakes
locs = collections.defaultdict(collections.deque)
# result - assume all days are rainy
res = [-1] * len(rains)
# pre-processing - {K: lake, V: list of rainy days}
for i, lake in enumerate(rains):
locs[lake].append(i)
for i, lake in enumerate(rains):
print "nearest wet day:", nearest
# check whether the day, i, is a flooded day
# the nearest lake got flooded (termination case)
if nearest and nearest[0] == i:
print []
exit()
# lake got wet
if lake != 0:
# pop the wet day. time complexity: O(1)
locs[lake].popleft()
# prioritize the next rainy day of this lake
if locs[lake]:
nxt = locs[lake][0]
heapq.heappush(nearest, nxt)
print "nearest wet day:", nearest
# a dry day
else:
# no wet lake, append an arbitrary value
if not nearest:
res[i] = 1
else:
# dry the lake that has the highest priority
# since that lake will be flooded in nearest future otherwise (greedy property)
next_wet_day = heapq.heappop(nearest)
wet_lake = rains[next_wet_day]
res[i] = wet_lake
print ""
print res
|
| ef0440b8ce5c5303d75b1d297e323a1d8b92d619 | AndreiBoris/sample-problems | /python/0200-numbers-of-islands/number-of-islands.py | 5,325 | 3.84375 | 4 |
from typing import List
LAND = '1'
WATER = '0'
# TODO: Review a superior solutions
def overlaps(min1, max1, min2, max2):
overlap = max(0, min(max1, max2) - max(min1, min2))
if overlap > 0:
return True
if min1 == min2 or min1 == max2 or max1 == min2 or max1 == max2:
return True
if (min1 > min2 and max1 < max2) or (min2 > min1 and max2 < max1):
return True
return False
print(overlaps(0, 2, 1, 1))
# Definition for a Bucket.
class Bucket:
def __init__(self, identifiers: List[int]):
self.destination = None
self.identifiers = set(identifiers)
def hasDestination(self) -> bool:
return self.destination != None
def getDestination(self):
if not self.hasDestination():
return self
return self.destination.getDestination()
def combine(self, bucket):
otherDestination = bucket.getDestination()
thisDestination = self.getDestination()
uniqueIdentifiers = otherDestination.identifiers | thisDestination.identifiers
newBucket = Bucket(uniqueIdentifiers)
otherDestination.destination = newBucket
thisDestination.destination = newBucket
return newBucket
def contains(self, identifier: int) -> bool:
return identifier in self.getDestination().identifiers
class Solution:
'''
Given a 2d grid map of '1's (land) and '0's (water), count the number of islands.
An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically.
You may assume all four edges of the grid are all surrounded by water.
'''
def numIslands(self, grid: List[List[str]]) -> int:
if len(grid) < 1:
return 0
nextRowIsland = 1
rowIslands = {}
currentRowIslandStart = None
'''
Here we are generating row islands that we will then be pairing with adjacent row islands to form
groups that we will then combine into the true islands that are needed to get the correct answer
'''
for rowIndex, row in enumerate(grid):
lastSpot = WATER
lengthOfRow = len(row)
rowIslands[rowIndex] = []
for spotIndex, spot in enumerate(row):
if lastSpot == WATER and spot == LAND:
currentRowIslandStart = spotIndex
if spotIndex + 1 >= lengthOfRow and spot == LAND:
rowIslands[rowIndex].append((nextRowIsland, currentRowIslandStart, spotIndex))
nextRowIsland += 1
currentRowIslandStart = None
elif spot == WATER and currentRowIslandStart != None:
rowIslands[rowIndex].append((nextRowIsland, currentRowIslandStart, spotIndex - 1))
nextRowIsland += 1
if spot == WATER:
currentRowIslandStart = None
lastSpot = spot
nextGroup = 1
maxRowIndex = len(grid)
rowIslandsToGroups = {}
for rowNumber in [rowNumber for rowNumber in range(maxRowIndex)]:
for rowIslandNumber, startIndex, endIndex in rowIslands[rowNumber]:
rowIslandsToGroups[rowIslandNumber] = []
if rowNumber == 0:
rowIslandsToGroups[rowIslandNumber].append(nextGroup)
nextGroup += 1
continue
for prevRowIslandNumber, prevStartIndex, prevEndIndex in rowIslands[rowNumber - 1]:
if overlaps(prevStartIndex, prevEndIndex, startIndex, endIndex):
for groupNumber in rowIslandsToGroups[prevRowIslandNumber]:
rowIslandsToGroups[rowIslandNumber].append(groupNumber)
if len(rowIslandsToGroups[rowIslandNumber]) == 0:
rowIslandsToGroups[rowIslandNumber].append(nextGroup)
nextGroup += 1
groupBuckets = {}
allBuckets = []
for rowIslandNumber in range(1, nextRowIsland):
relatedGroups = rowIslandsToGroups[rowIslandNumber]
for group in relatedGroups:
if (groupBuckets.get(group, None)) == None:
newGroupBucket = Bucket([group])
groupBuckets[group] = newGroupBucket
allBuckets.append(newGroupBucket)
relatedBuckets = [groupBuckets[group] for group in relatedGroups]
firstBucket = relatedBuckets[0]
for group in relatedGroups:
if not firstBucket.contains(group):
newCombinedBucket = firstBucket.combine(groupBuckets[group])
allBuckets.append(newCombinedBucket)
return len([resultBucket for resultBucket in allBuckets if not resultBucket.hasDestination()])
solver = Solution()
# 1
# inputGrid = [
# '11110',
# '11010',
# '11000',
# '00000',
# ]
# 3
# inputGrid = [
# '11000',
# '11000',
# '00100',
# '00011',
# ]
# 1
# inputGrid = [
# '11011',
# '10001',
# '10001',
# '11111',
# ]
# 5
# inputGrid = [
# '101',
# '010',
# '101',
# ]
# 1
inputGrid = [
'111',
'010',
'010',
]
print(solver.numIslands(inputGrid))
|
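The row above flags "TODO: Review a superior solutions"; for comparison, a conventional iterative flood-fill sketch for the same problem (not the author's row-merging approach) is:

```python
from typing import List

def num_islands(grid: List[str]) -> int:
    # flood-fill each unvisited land cell and count how many fills are needed
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != '1' or (r, c) in seen:
                continue
            count += 1
            stack = [(r, c)]
            while stack:
                x, y = stack.pop()
                if not (0 <= x < rows and 0 <= y < cols):
                    continue
                if grid[x][y] != '1' or (x, y) in seen:
                    continue
                seen.add((x, y))
                stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return count

print(num_islands(['111', '010', '010']))  # 1, matching the row's final test case
```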
| 227925521077e04140edcb13d50808695efd39a5 | erikseulean/machine_learning | /python/linear_regression/multivariable.py | 1,042 | 3.671875 | 4 |
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
iterations = 35
alpha = 0.1
def read_data():
data = np.loadtxt('data/housing_prices.in', delimiter=',')
X = data[:, [0,1]]
y = data[:, 2]
y.shape = (y.shape[0], 1)
return X, y
def normalize(X):
return (X - X.mean(0))/X.std(0)
def add_xzero(X):
return np.hstack((np.ones((X.shape[0],1)), X))
def gradient_descent(X, y):
theta = np.zeros((X.shape[1],1))
m = X.shape[0]
cost = []
for _ in range(iterations):
X_transpose = np.transpose(X)
cost_deriv = (alpha/m) * np.dot(X_transpose, np.dot(X, theta) - y)
theta = theta - cost_deriv
cost_func = np.sum(np.square(np.dot(X, theta) - y))/(2 * m)
cost.append(cost_func)
return theta, cost
def plot_cost_function(cost):
plt.plot(cost)
plt.xlabel("Iterations")
plt.ylabel("Cost function")
plt.show()
X, y = read_data()
X = add_xzero(normalize(X))
theta, cost = gradient_descent(X, y)
plot_cost_function(cost)
|
| c7b567bde9e143c404c3670793576644a26f6142 | AhmadQasim/Battleships-AI | /gym-battleship/gym_battleship/envs/battleship_env.py | 4,760 | 3.53125 | 4 |
import gym
import numpy as np
from abc import ABC
from gym import spaces
from typing import Tuple
from copy import deepcopy
from collections import namedtuple
Ship = namedtuple('Ship', ['min_x', 'max_x', 'min_y', 'max_y'])
Action = namedtuple('Action', ['x', 'y'])
# Extension: Add info for when the ship is sunk
class BattleshipEnv(gym.Env, ABC):
def __init__(self, board_size: Tuple = None, ship_sizes: dict = None, episode_steps: int = 100):
self.ship_sizes = ship_sizes or {5: 1, 4: 1, 3: 2, 2: 1}
self.board_size = board_size or (10, 10)
self.board = None
self.board_generated = None
self.observation = None
self.done = None
self.step_count = None
self.episode_steps = episode_steps
self.action_space = spaces.Discrete(self.board_size[0] * self.board_size[1])
# MultiBinary is a binary space array
self.observation_space = spaces.MultiBinary([2, self.board_size[0], self.board_size[1]])
# dict to save all the ship objects
self.ship_dict = {}
def step(self, raw_action: int) -> Tuple[np.ndarray, int, bool, dict]:
assert (raw_action < self.board_size[0]*self.board_size[1]),\
"Invalid action (Superior than size_board[0]*size_board[1])"
action = Action(x=raw_action // self.board_size[0], y=raw_action % self.board_size[1])
self.step_count += 1
if self.step_count >= self.episode_steps:
self.done = True
# it looks if there is a ship on the current cell
# if there is a ship then the cell is 1 and 0 otherwise
if self.board[action.x, action.y] != 0:
# if the cell that we just hit is the last one from the respective ship
# then add this info to the observation
if self.board[self.board == self.board[action.x, action.y]].shape[0] == 1:
ship = self.ship_dict[self.board[action.x, action.y]]
self.observation[1, ship.min_x:ship.max_x, ship.min_y:ship.max_y] = 1
self.board[action.x, action.y] = 0
self.observation[0, action.x, action.y] = 1
# if the whole board is already filled, no ships
if not self.board.any():
self.done = True
return self.observation, 100, self.done, {}
return self.observation, 1, self.done, {}
# we end up here if we hit a cell that we had hit before already
elif self.observation[0, action.x, action.y] == 1 or self.observation[1, action.x, action.y] == 1:
return self.observation, -1, self.done, {}
# we end up here if we hit a cell that has not been hit before and doesn't contain a ship
else:
self.observation[1, action.x, action.y] = 1
return self.observation, 0, self.done, {}
def reset(self):
self.set_board()
# maintain an original copy of the board generated in the start
self.board_generated = deepcopy(self.board)
self.observation = np.zeros((2, *self.board_size), dtype=np.float32)
self.step_count = 0
return self.observation
def set_board(self):
self.board = np.zeros(self.board_size, dtype=np.float32)
k = 1
for i, (ship_size, ship_count) in enumerate(self.ship_sizes.items()):
for j in range(ship_count):
self.place_ship(ship_size, k)
k += 1
def place_ship(self, ship_size, ship_index):
can_place_ship = False
while not can_place_ship:
ship = self.get_ship(ship_size, self.board_size)
can_place_ship = self.is_place_empty(ship)
# set the ship cells to one
self.board[ship.min_x:ship.max_x, ship.min_y:ship.max_y] = ship_index
self.ship_dict.update({ship_index: ship})
@staticmethod
def get_ship(ship_size, board_size) -> Ship:
if np.random.choice(('Horizontal', 'Vertical')) == 'Horizontal':
# find the ship coordinates randomly
min_x = np.random.randint(0, board_size[0] - 1 - ship_size)
min_y = np.random.randint(0, board_size[1] - 1)
return Ship(min_x=min_x, max_x=min_x + ship_size, min_y=min_y, max_y=min_y + 1)
else:
min_x = np.random.randint(0, board_size[0] - 1)
min_y = np.random.randint(0, board_size[1] - 1 - ship_size)
return Ship(min_x=min_x, max_x=min_x + 1, min_y=min_y, max_y=min_y + ship_size)
def is_place_empty(self, ship):
# make sure that there are no ships by simply summing the cell values
return np.count_nonzero(self.board[ship.min_x:ship.max_x, ship.min_y:ship.max_y]) == 0
def get_board(self):
return self.board
|
| d818d77f017fb908113dbdffbbaafa2b301d5999 | dlaststark/machine-learning-projects | /Programming Language Detection/Experiment-2/Dataset/Train/Python/fibonacci-n-step-number-sequences-3.py | 549 | 3.671875 | 4 |
from itertools import islice, cycle
def fiblike(tail):
for x in tail:
yield x
for i in cycle(xrange(len(tail))):
tail[i] = x = sum(tail)
yield x
fibo = fiblike([1, 1])
print list(islice(fibo, 10))
lucas = fiblike([2, 1])
print list(islice(lucas, 10))
suffixes = "fibo tribo tetra penta hexa hepta octo nona deca"
for n, name in zip(xrange(2, 11), suffixes.split()):
fib = fiblike([1] + [2 ** i for i in xrange(n - 1)])
items = list(islice(fib, 15))
print "n=%2i, %5snacci -> %s ..." % (n, name, items)
|
| d6ddba1536a4377251089a3df2ad91fb87b987b8 | jeremyyew/tech-prep-jeremy.io | /code/techniques/8-DFS/M341-flatten-nested-list-iterator.py | 606 | 4 | 4 |
'''
- Only pop and unpack what is necessary.
- Pop and unpack when `hasNext` is called - it ensures there is a next available for `next`, if there really is a next.
- At the end only need to check if stack is nonempty - stack nonempty and last element not integer is not possible.
'''
class NestedIterator(object):
def __init__(self, nestedList):
self.stack = nestedList[::-1]
def next(self):
return self.stack.pop().getInteger()
def hasNext(self):
while self.stack and not self.stack[-1].isInteger():
nl = self.stack.pop()
self.stack.extend(nl.getList()[::-1])
return self.stack
|
| eeb2068deeec87798355fe1bdd1e0f3508cbdcab | jiqin/leetcode | /codes/212.py | 2,112 | 3.53125 | 4 |
class Solution(object):
def findWords(self, board, words):
"""
:type board: List[List[str]]
:type words: List[str]
:rtype: List[str]
"""
word_map = {}
for word in words:
for i in range(len(word)):
word_map[word[0:i+1]] = 0
for word in words:
word_map[word] = 1
# print word_map
results = []
for i in range(len(board)):
for j in range(len(board[0])):
tmp_word = ''
history_pos = []
heap = [(i, j, 0)]
while heap:
x, y, l = heap.pop()
if x < 0 or x >= len(board) or y < 0 or y >= len(board[0]):
continue
assert len(history_pos) >= l
history_pos = history_pos[0:l]
if (x, y) in history_pos:
continue
assert len(tmp_word) >= l
tmp_word = tmp_word[0:l] + board[x][y]
history_pos.append((x, y))
# print x, y, tmp_word, heap, history_pos
value = word_map.get(tmp_word)
if value is None:
continue
if value == 1:
results.append(tmp_word)
heap.append((x - 1, y, l + 1))
heap.append((x + 1, y, l + 1))
heap.append((x, y - 1, l + 1))
heap.append((x, y + 1, l + 1))
return list(set(results))
for b, w in (
# ([
# ['o', 'a', 'a', 'n'],
# ['e', 't', 'a', 'e'],
# ['i', 'h', 'k', 'r'],
# ['i', 'f', 'l', 'v']
# ],
# ["oath", "pea", "eat", "rain"]),
# (['ab', 'cd'], ['acdb']),
# (["ab","cd"], ["ab","cb","ad","bd","ac","ca","da","bc","db","adcb","dabc","abb","acb"]),
(["abc","aed","afg"], ["abcdefg","gfedcbaaa","eaabcdgfa","befa","dgc","ade"]),
):
print Solution().findWords(b, w)
|
| 5185d421330d59dc45577e6ed4e046a961461ae6 | m-hawke/codeeval | /moderate/17_sum_of_integers.py | 970 | 3.703125 | 4 |
import sys
for line in open(sys.argv[1]):
numbers = [int(x) for x in line.strip().split(',')]
max_ = sum(numbers)
for length in range(1, len(numbers), 2): # N.B increment by 2
sum_ = sum(numbers[:length])
max_ = sum_ if sum_ > max_ else max_
for i in range(len(numbers)-length):
# N.B. the following sum is also a sum of contiguous numbers
# for length + 1. We need calculate this once only, and
# therefore the length loop (see above) is incremented by 2
# each time. */
sum_ += numbers[i+length]
max_ = sum_ if sum_ > max_ else max_
sum_ -= numbers[i]
max_ = sum_ if sum_ > max_ else max_
print(max_)
#for line in open(sys.argv[1]):
# numbers = [int(x) for x in line.strip().split(',')]
# print(max([sum(numbers[x:x+i])
# for i in range(1,len(numbers)+1)
# for x in range(len(numbers)-i+1)]))
|
| 9d1ed42cd4a586df38bb7f2a122db69fbece314a | aidank18/PythonProjects | /montyhall/MontyHall.py | 1,052 | 3.78125 | 4 |
from random import random
def game(stay):
doors = makeDoors()
choice = int(random() * 3)
for i in range(0, 2):
if (i != choice) and doors[i] == "g":
doors[i] = "r"
break
if stay:
return doors[choice]
else:
doors[choice] = "r"
for door in doors:
if door != "r":
return door
def makeDoors():
doors = ["g", "g", "c"]
for i in range(0, 10):
val1 = int(random() * 3)
val2 = int(random() * 3)
temp = doors[val1]
doors[val1] = doors[val2]
doors[val2] = temp
return doors
def tests(stay):
cars = 0
for i in range(0, 10000):
if game(stay) == "c":
cars += 1
probability = cars/10000
if stay:
print()
print("The probability of picking the car by staying with the same door is", probability)
print()
else:
print()
print("The probability of picking the car by switching doors is", probability)
print()
tests(False)
|
| 4b7ef56e7abace03cacb505daa6b34bdf90897b5 | sandrahelicka88/codingchallenges | /EveryDayChallenge/happyNumber.py | 796 | 4.09375 | 4 |
import unittest
'''Write an algorithm to determine if a number is "happy".
A happy number is a number defined by the following process: Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers.'''
def isHappy(n):
path = set()
while n not in path and n!=1:
path.add(n)
nextSum = 0
while n:
nextSum+=(n%10)**2
n = n//10
n = nextSum
return n==1
class Test(unittest.TestCase):
def test_happyNumber(self):
self.assertTrue(isHappy(19))
if __name__ == '__main__':
unittest.main()
|
| 35f9a1fc4c45112660f9cd871d1514b348990ddf | Jyun-Neng/LeetCode_Python | /103-binary-tree-zigzag-level-order.py | 1,644 | 4.09375 | 4 |
"""
Given a binary tree, return the zigzag level order traversal of its nodes' values.
(ie, from left to right, then right to left for the next level and alternate between).
For example:
Given binary tree [3,9,20,null,null,15,7],
3
/ \
9 20
/ \
15 7
return its zigzag level order traversal as:
[
[3],
[20,9],
[15,7]
]
"""
import collections
# Definition for a binary tree node.
class TreeNode:
def __init__(self, x):
self.val = x
self.left = None
self.right = None
class Solution:
def zigzagLevelOrder(self, root):
"""
:type root: TreeNode
:rtype: List[List[int]]
"""
if not root:
return []
queue = collections.deque([])
queue.append(root)
reverse = False
res = []
# BFS
while queue:
size = len(queue)
nodes = [0 for i in range(size)]
for i in range(size):
node = queue.popleft()
idx = i if not reverse else size - 1 - i
nodes[idx] = node.val
if node.left:
queue.append(node.left)
if node.right:
queue.append(node.right)
reverse = not reverse
res.append(nodes)
return res
if __name__ == "__main__":
vals = [3, 9, 20, None, None, 15, 7]
root = TreeNode(vals[0])
node = root
node.left = TreeNode(vals[1])
node.right = TreeNode(vals[2])
node = node.right
node.left = TreeNode(vals[5])
node.right = TreeNode(vals[6])
print(Solution().zigzagLevelOrder(root))
|
| eee031351c4115a1057b3b81293fe25a17ad8066 | jrinder42/Advent-of-Code-2020 | /day15/day15.py | 767 | 3.5 | 4 |
'''
Advent of Code 2020 - Day 15
'''
lookup = {0: [1],
3: [2],
1: [3],
6: [4],
7: [5],
5: [6]}
turn = 7
prev = 5
while turn != 2020 + 1: # Part 1
#while turn != 30_000_000 + 1: # Part 2
if prev in lookup and len(lookup[prev]) == 1:
prev = 0
if prev in lookup:
lookup[prev].append(turn)
else:
lookup[prev] = [turn]
elif prev in lookup: # not unique
prev = lookup[prev][-1] - lookup[prev][-2] # most recent - second most recent
if prev in lookup:
lookup[prev].append(turn)
else:
lookup[prev] = [turn]
turn += 1
print('Advent of Code Day 15 Answer Part 1 / 2:', prev) # depends on while loop condition
|
| 0838fcd3cd4ab10f7dffb318fdf9534e7db0e079 | rongduan-zhu/codejam2016 | /1a/c/c.py | 2,061 | 3.578125 | 4 |
#!/usr/bin/env python
import sys
class Node:
def __init__(self, outgoing, incoming):
self.outgoing = outgoing
self.incoming = incoming
def solve(fname):
with open(fname) as f:
tests = int(f.readline())
for i in xrange(tests):
f.readline()
deps = map(int, f.readline().split(' '))
two_node_cycles = []
nodes = {}
for j in xrange(1, len(deps) + 1):
# setup outgoing nodes
if j in nodes:
nodes[j].outgoing = deps[j - 1]
else:
nodes[j] = Node(deps[j - 1], [])
# setup incoming nodes
if deps[j - 1] in nodes:
nodes[deps[j - 1]].incoming.append(j)
else:
nodes[deps[j - 1]] = Node(None, incoming=[j])
# setup two node cycles
if nodes[j].outgoing in nodes and j == nodes[nodes[j].outgoing].outgoing:
two_node_cycles.append((j, nodes[j].outgoing))
print 'Case #{}: {}'.format(i + 1, traverse(nodes, two_node_cycles))
def traverse(nodes, two_node_cycles):
bff_cycle = 0
visited = {}
for n1, n2 in two_node_cycles:
visited[n1] = True
visited[n2] = True
bff_cycle += traverse_up(n1, nodes, visited, 1)
bff_cycle += traverse_up(n2, nodes, visited, 1)
for node in nodes:
if node not in visited:
visited_in_path = set()
visited_in_path.add(node)
start = node
current = nodes[start].outgoing
longest_cycle = 1
while current not in visited_in_path:
visited_in_path.add(current)
current = nodes[current].outgoing
longest_cycle += 1
if start == current and longest_cycle > bff_cycle:
bff_cycle = longest_cycle
return bff_cycle
def traverse_up(node, nodes, visited, length):
max_len = length
for up_node in nodes[node].incoming:
if up_node not in visited:
visited[up_node] = True
up_length = traverse_up(up_node, nodes, visited, length + 1)
max_len = up_length if up_length > max_len else max_len
return max_len
if __name__ == '__main__':
solve(sys.argv[1])
|
| d6c1dd48d0c1b6bdc9c5bd7ebd936b30201112e5 | febikamBU/string-matching | /bmh.py | 2,182 | 4.03125 | 4 |
from collections import defaultdict
from sys import argv, exit
from comparer import Comparer
def precalc(pattern):
"""
Create the precalculation table: a dictionary of the number of characters
after the last occurrence of a given character. This provides the number of
characters to shift by in the case of a mismatch. Defaults to the length of
the string.
"""
table = defaultdict(lambda: len(pattern))
for i in range(len(pattern) - 1):
table[pattern[i]] = len(pattern) - i - 1
return table
def run_bmh(table, text, pattern, compare):
"""
Using the precalculated table, yield every match of the pattern in the
text, making comparisons with the provided compare function.
"""
# Currently attempted offset of the pattern in the text
skip = 0
# Keep going until the pattern overflows the text
while skip + len(pattern) <= len(text):
# Start matching from the end of the string
i = len(pattern) - 1
# Match each element in the pattern, from the end to the beginning
while i >= 0 and compare(text, skip+i, pattern, i):
i -= 1
# If the start of the string has been reached (and so every comparison
# was successful), then yield the position
if i < 0:
yield skip
# Shift by the precalculated offset given by the character in the text
# at the far right of the pattern, so that it lines up with an equal
# character in the pattern, if posssible. Otherwise the pattern is
# moved to after this position.
skip += table[text[skip + len(pattern) - 1]]
if __name__ == "__main__":
try:
pattern = argv[1]
text = argv[2]
except IndexError:
print("usage: python3 bmh.py PATTERN TEXT")
exit()
print(f'Searching for "{pattern}" in "{text}".')
print()
compare = Comparer()
table = precalc(pattern)
print(f'Precomputed shift table: {dict(table)}')
print()
for match in run_bmh(table, text, pattern, compare):
print(f"Match found at position {match}")
print(f"{compare.count} comparisons")
|
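A tiny self-contained check of the shift table described by the precalc docstring above (the function is restated here so the snippet runs without the Comparer module; expected values worked out by hand):

```python
from collections import defaultdict

def precalc(pattern):
    # shift = distance from a character's last occurrence (excluding the final
    # position) to the end of the pattern; unseen characters shift a full length
    table = defaultdict(lambda: len(pattern))
    for i in range(len(pattern) - 1):
        table[pattern[i]] = len(pattern) - i - 1
    return table

table = precalc("abcab")
print(table["a"], table["b"], table["c"], table["z"])  # 1 3 2 5
```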
| ed83eebfc06efbb76af1baf6f9996f4389556824 | jhgdike/leetCode | /leetcode_python/1-100/43.py | 686 | 3.578125 | 4 |
class Solution(object):
"""
Python can do it directly by str(int(num1)*int(num2))
"""
def multiply(self, num1, num2):
"""
:type num1: str
:type num2: str
:rtype: str
"""
res = [0] * (len(num1) + len(num2))
length = len(res)
for i, n1 in enumerate(reversed(num1)):
for j, n2 in enumerate(reversed(num2)):
res[i + j] += int(n1) * int(n2)
res[i + j + 1] += res[i + j] // 10
res[i + j] %= 10
pt = length
while pt > 0 and res[pt - 1] == 0:
pt -= 1
res = res[:pt]
return ''.join(map(str, res[::-1] or [0]))
|
db6be99a0fb8e2a18b0470e53a21557ef47a026a
|
akaliutau/cs-problems-python
|
/problems/dp/Solution32.py
| 426
| 3.875
| 4
|
"""
Given a string containing just the characters '(' and ')', find the length of
the longest valid (well-formed) parentheses substring.
Example 1:
Input: s = "(()" Output: 2 Explanation: The longest valid parentheses
substring is "()". Example 2:
Input: s = ")()())" Output: 4 Explanation: The longest valid parentheses
substring is "()()".
"""
class Solution32:
pass
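# The class above is left as an empty stub in the source file. Below is a
# hypothetical stack-based sketch (not the author's solution) of the problem
# described in the docstring.
class Solution32Sketch:
    def longestValidParentheses(self, s):
        # Stack of indices; -1 is a sentinel marking the position just before
        # the current run of valid parentheses.
        stack = [-1]
        longest = 0
        for i, ch in enumerate(s):
            if ch == '(':
                stack.append(i)
            else:
                stack.pop()
                if not stack:
                    # Unmatched ')': it becomes the new sentinel.
                    stack.append(i)
                else:
                    longest = max(longest, i - stack[-1])
        return longest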
|
f1d2f920c551650c468ce2db140b2a45bd44bbf7
|
DayGitH/Python-Challenges
|
/DailyProgrammer/DP20141231B.py
| 1,290
| 4.09375
| 4
|
"""
[2014-12-31] Challenge #195 [Intermediate] Math Dice
https://www.reddit.com/r/dailyprogrammer/comments/2qxrtk/20141231_challenge_195_intermediate_math_dice/
#Description:
Math Dice is a game where you use dice and number combinations to score. It's a neat way for kids to get mathematical
dexterity. In the game, you first roll the 12-sided Target Die to get your target number, then roll the five 6-sided
Scoring Dice. Using addition and/or subtraction, combine the Scoring Dice to match the target number. The number of
dice you used to achieve the target number is your score for that round. For more information, see the product page for
the game: (http://www.thinkfun.com/mathdice)
#Input:
You'll be given the dimensions of the dice as NdX where N is the number of dice to roll and X is the size of the dice.
In standard Math Dice Jr you have 1d12 and 5d6.
#Output:
You should emit the dice you rolled and then the equation with the dice combined. E.g.
9, 1 3 1 3 5
3 + 3 + 5 - 1 - 1 = 9
#Challenge Inputs:
1d12 5d6
1d20 10d6
1d100 50d6
#Challenge Credit:
Thanks to /u/jnazario for his idea -- posted in /r/dailyprogrammer_ideas
#New year:
Happy New Year to everyone!! Welcome to Y2k+15
"""
def main():
pass
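# main() above is an empty stub; the challenge is not actually solved in this
# file. The following is a hypothetical sketch (not the author's code) of one
# round of the game described in the docstring: roll the dice, then search the
# +/-/unused assignments of the scoring dice for a combination that hits the
# target, preferring the one that uses the most dice.
import random
from itertools import product

def play_round(target_die=12, scoring_dice=5, sides=6):
    target = random.randint(1, target_die)
    rolls = [random.randint(1, sides) for _ in range(scoring_dice)]
    best = None
    # Each die is either added (+1), subtracted (-1) or left unused (0).
    for signs in product((1, -1, 0), repeat=scoring_dice):
        used = [r * s for r, s in zip(rolls, signs) if s != 0]
        if used and sum(used) == target:
            if best is None or len(used) > len(best):
                best = used
    print('{}, {}'.format(target, ' '.join(str(r) for r in rolls)))
    if best is not None:
        terms = ' + '.join(str(v) for v in best).replace('+ -', '- ')
        print('{} = {}'.format(terms, target))
    return len(best) if best is not None else 0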
if __name__ == "__main__":
main()
|
4b2eb7f54b2898ce241dddfc0cc7966971ac4589
|
Keshav1506/competitive_programming
|
/Hashing/006_geeksforgeeks_Swapping_Pairs_Make_Sum_Equal/Solution.py
| 3,297
| 4.03125
| 4
|
#
# Time : O(N log N + M log M); Space: O(1)
# @tag : Hashing
# @by : Shaikat Majumdar
# @date: Aug 27, 2020
# **************************************************************************
# GeeksForGeeks - Swapping pairs make sum equal
#
# Description:
#
# Given two arrays of integers A[] and B[] of size N and M, the task is to check if a pair of values (one value from each array) exists such that swapping the elements of the pair will make the sum of two arrays equal.
#
# Example 1:
#
# Input: N = 6, M = 4
# A[] = {4, 1, 2, 1, 1, 2}
# B[] = (3, 6, 3, 3)
#
# Output: 1
# Explanation: Sum of elements in A[] = 11
# Sum of elements in B[] = 15, To get same
# sum from both arrays, we can swap following
# values: 1 from A[] and 3 from B[]
#
# Example 2:
#
# Input: N = 4, M = 4
# A[] = {5, 7, 4, 6}
# B[] = {1, 2, 3, 8}
#
# Output: 1
# Explanation: We can swap 6 from array
# A[] and 2 from array B[]
#
# Your Task:
# This is a function problem. You don't need to take any input, as it is already accomplished by the driver code.
# You just need to complete the function findSwapValues() that takes array A, array B, integer N, and integer M
# as parameters and returns 1 if there exists any such pair otherwise returns -1.
#
# Expected Time Complexity: O(MlogM+NlogN).
# Expected Auxiliary Space: O(1).
#
# **************************************************************************
# Source: https://practice.geeksforgeeks.org/problems/swapping-pairs-make-sum-equal4142/1 (GeeksForGeeks - Swapping pairs make sum equal)
#
# **************************************************************************
# Solution Explanation
# **************************************************************************
# Refer to Solution_Explanation.md.
#
import unittest
class Solution:
# Returns sum of elements in list
def getSum(self, X):
sum = 0
for i in X:
sum += i
return sum
# Finds value of
# a - b = (sumA - sumB) / 2
def getTarget(self, A, B):
# Calculate the sums of both lists
sum1 = self.getSum(A)
sum2 = self.getSum(B)
# Because the target must be an integer
if (sum1 - sum2) % 2 != 0:
return 0
return (sum1 - sum2) // 2
def findSwapValues(self, A, B):
# Call for sorting the lists
A.sort()
B.sort()
# Note that target can be negative
target = self.getTarget(A, B)
# target 0 means, answer is not possible
if target == 0:
return False
i, j = 0, 0
while i < len(A) and j < len(B):
diff = A[i] - B[j]
if diff == target:
return True
# Look for a greater value in list A
elif diff < target:
i += 1
# Look for a greater value in list B
else:
j += 1
class Test(unittest.TestCase):
def setUp(self) -> None:
pass
def tearDown(self) -> None:
pass
def test_findSwapValues(self) -> None:
sol = Solution()
for A, B, solution in (
[[4, 1, 2, 1, 1, 2], [3, 6, 3, 3], True],
[[5, 7, 4, 6], [1, 2, 3, 8], True],
):
self.assertEqual(solution, sol.findSwapValues(A, B))
if __name__ == "__main__":
unittest.main()
|
545049864c75b1045885c8eddba22cc60de252d9
|
betty29/code-1
|
/recipes/Python/577289_Maclaurinsseriestan1/recipe-577289.py
| 1,615
| 4.0625
| 4
|
#On the name of ALLAH and may the blessing and peace of Allah
#be upon the Messenger of Allah Mohamed Salla Allahu Aliahi Wassalam.
#Author : Fouad Teniou
#Date : 06/07/10
#version :2.6
"""
maclaurin_tan-1 is a function to compute tan-1(x) using maclaurin series
and the interval of convergence is -1 <= x <= +1
tan-1(x) = x - x^3/3 + x^5/5 - x^7/7 ...........
"""
from math import *
def error(number):
""" Raises interval of convergence error."""
if number > 1 or number < -1 :
raise TypeError,\
"\n<The interval of convergence should be -1 <= value <= 1 \n"
def maclaurin_cot(value, k):
"""
Compute maclaurin's series approximation for tan-1(x)
"""
global first_value
first_value = 0.0
#attempt to Approximate tan-1(x) for a given value
try:
error(value)
for item in xrange(1,k,4):
next_value = value**item/float(item)
first_value += next_value
for arg in range(3,k,4):
next_value = -1* value **arg/float(arg)
first_value += next_value
return round(first_value*180/pi,2)
#Raise TypeError if input is not within
#the interval of convergence
except TypeError,exception:
print exception
if __name__ == "__main__":
maclaurin_cot1 = maclaurin_cot(0.305730681,100)
print maclaurin_cot1
maclaurin_cot2 = maclaurin_cot(0.75355405,100)
print maclaurin_cot2
maclaurin_cot3 = maclaurin_cot(0.577350269,100)
print maclaurin_cot3
#################################################################
#"C:\python
#17.0
#37.0
#30.0
|
c56d992234d558fd0b0b49aa6029d6d287e90f2a
|
ARSimmons/IntroToPython
|
/Students/Dave Fugelso/Session 2/ack.py
| 2,766
| 4.25
| 4
|
'''
Dave Fugelso Python Course homework Session 2 Oct. 9
The Ackermann function, A(m, n), is defined:
A(m, n) =
n+1 if m = 0
A(m-1, 1) if m > 0 and n = 0
A(m-1, A(m, n-1)) if m > 0 and n > 0.
See http://en.wikipedia.org/wiki/Ackermann_funciton
Create a new module called ack.py in a session02 folder in your student folder.
In that module, write a function named ack that performs Ackermann's function.
Write a good docstring for your function according to PEP 257.
Ackermanns function is not defined for input values less than 0. Validate inputs to your function and return None if they are negative.
The wikipedia page provides a table of output values for inputs between 0 and 4. Using this table, add a if __name__ == "__main__": block to test your function.
Test each pair of inputs between 0 and 4 and assert that the result produced by your function is the result expected by the wikipedia table.
When your module is run from the command line, these tests should be executed. If they all pass,
print All Tests Pass as the result.
Add your new module to your git clone and commit frequently while working on your implementation. Include good commit messages that explain concisely both what you are doing and why.
When you are finished, push your changes to your fork of the class repository in GitHub. Then make a pull request and submit your assignment in Canvas.
'''
#Ackermann function
def ack(m, n):
'''
Calculate the value for Ackermann's function for m, n.
'''
if m < 0 or n < 0: return None
if m == 0: return n+1
if n == 0: return ack(m-1, 1)
return ack (m-1, ack (m, n-1))
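# Hypothetical addition (not part of the submitted homework): the assert-based
# test that the assignment text above asks for, built from the Wikipedia table
# of Ackermann values. Rows stop at m = 3 because m = 4 quickly exceeds the
# recursion limit; call run_ack_tests() from the __main__ block to use it.
ACK_TABLE = {
    (0, 0): 1, (0, 1): 2, (0, 2): 3, (0, 3): 4, (0, 4): 5,
    (1, 0): 2, (1, 1): 3, (1, 2): 4, (1, 3): 5, (1, 4): 6,
    (2, 0): 3, (2, 1): 5, (2, 2): 7, (2, 3): 9, (2, 4): 11,
    (3, 0): 5, (3, 1): 13, (3, 2): 29, (3, 3): 61, (3, 4): 125,
}

def run_ack_tests():
    for (m, n), expected in ACK_TABLE.items():
        assert ack(m, n) == expected
    assert ack(-1, 0) is None and ack(0, -1) is None
    print('All Tests Pass')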
class someClass (object):
def __init__(self):
self.setBody('there')
def afunc (self, a):
print a, self.getBody()
def getBody(self):
return self.__body
def setBody(self, value):
self.__body = value
body = property(getBody, setBody, None, "Body property.")
if __name__ == "__main__":
'''
Unit test for Ackermann function. Print table m = 0,4 and n = 0,4.
'''
#Print nicely
print 'm/n\t\t',
for n in range(0,5):
print n, '\t',
print '\n'
for m in range (0,4):
print m,'\t',
for n in range(0,5):
print '\t',
print ack(m, n),
print
# for the m = 4 row, just print the first one (n = 0) otherwise we hit a stack overflow (maximum recursion depth)
m = 4
print m,'\t',
for n in range(0,1):
print '\t',
print ack(m, n),
print '\t-\t-\t-\t-'
print 'All Tests Pass'
s = someClass ()
s.afunc('hello')
s.body = 'fuck ya!'
s.afunc('hello')
s.body = 'why not?'
|
2b6b64ed41e1ed99a4e8a12e1e6ab53e6a9596ef
|
AusCommsteam/Algorithm-and-Data-Structures-and-Coding-Challenges
|
/Challenges/shortestWayToFormString.py
| 7,619
| 3.78125
| 4
|
"""
Shortest Way to Form String
From any string, we can form a subsequence of that string by deleting some number of characters (possibly no deletions).
Given two strings source and target, return the minimum number of subsequences of source such that their concatenation equals target. If the task is impossible, return -1.
"""
"""
Binary Search
Create mapping from each source char to the indices in source of that char.
Iterate over target, searching for the next index in source of each char. Return -1 if not found.
Search is by binary search of the list of indices in source of char.
If the next index in source requires wrapping around to the start of source, increment result count.
Time: O(n log m) for source of length m and target of length n.
Space: O(m)
The idea is to create an inverted index that saves the offsets of where each character occurs in source. The index data structure is represented as a hashmap, where the Key is the character, and the Value is the (sorted) list of offsets where this character appears. To run the algorithm, for each character in target, use the index to get the list of possible offsets for this character. Then search this list for next offset which appears after the offset of the previous character. We can use binary search to efficiently search for the next offset in our index.
Example with source = "abcab", target = "aabbaac"
The inverted index data structure for this example would be:
inverted_index = {
a: [0, 3] # 'a' appears at index 0, 3 in source
b: [1, 4], # 'b' appears at index 1, 4 in source
c: [2], # 'c' appears at index 2 in source
}
Initialize i = -1 (i represents the smallest valid next offset) and loop_cnt = 1 (number of passes through source).
Iterate through the target string "aabbaac"
a => get the offsets of character 'a' which is [0, 3]. Set i to 1.
a => get the offsets of character 'a' which is [0, 3]. Set i to 4.
b => get the offsets of character 'b' which is [1, 4]. Set i to 5.
b => get the offsets of character 'b' which is [1, 4]. Increment loop_cnt to 2, and Set i to 2.
a => get the offsets of character 'a' which is [0, 3]. Set i to 4.
a => get the offsets of character 'a' which is [0, 3]. Increment loop_cnt to 3, and Set i to 1.
c => get the offsets of character 'c' which is [2]. Set i to 3.
We're done iterating through target so return the number of loops (3).
The runtime is O(M) to build the index, and O(logM) for each query. There are N queries, so the total runtime is O(M + N*logM). M is the length of source and N is the length of target. The space complexity is O(M), which is the space needed to store the index.
"""
import bisect
import collections

class Solution:
def shortestWay(self, source: str, target: str) -> int:
index = collections.defaultdict(list)
for i, s in enumerate(source):
index[s].append(i)
res = 0
i = 0 # next index of source to check
for t in target:
if t not in index:
return -1 # cannot make target if char not in source
indices = index[t]
j = bisect.bisect_left(indices, i)
if j == len(indices): # index in char_indices[c] that is >= i
res += 1 # wrap around to beginning of source
j = 0
i = indices[j] + 1 # next index in source
return res if i == 0 else res + 1 # add 1 for partial source
def shortestWay(self, source: str, target: str) -> int:
inverted_index = collections.defaultdict(list)
for i, ch in enumerate(source):
inverted_index[ch].append(i)
loop_cnt = 1
i = -1
for ch in target:
if ch not in inverted_index:
return -1
offset_list_for_ch = inverted_index[ch]
# bisect_left(A, x) returns the smallest index j s.t. A[j] >= x. If no such index j exists, it returns len(A).
j = bisect.bisect_left(offset_list_for_ch, i)
if j == len(offset_list_for_ch):
loop_cnt += 1
i = offset_list_for_ch[0] + 1
else:
i = offset_list_for_ch[j] + 1
return loop_cnt
"""
DP
The main idea behind this code is also to build up an inverted index data structure for the source string and then to greedily use characters from source to build up the target. In this code, it's the dict array. Each character is mapped to an index where it is found at in source. In this code, dict[i][c - 'a'] represents the earliest index >= i where character c occurs in source.
For example, if source = "xyzy", then dict[0]['y' - 'a'] = 1 but dict[2]['y'-'a'] = 3.
Also a value of -1, means that there are no occurrences of character c after the index i.
So, after this inverted data structure is built (which took O(|Σ|*M) time). We iterate through the characters of our target String. The idxOfS represents the current index we are at in source.
For each character c in target, we look for the earliest occurrence of c in source using dict via dict[idxOfS][c - 'a']. If this is -1, then we have not found any other occurrences and hence we need to use a new subsequence of S.
Otherwise, we update idxOfS to be dict[idxOfS][c - 'a'] + 1 since we can only choose characters of source that occur after this character if we wish to use the same current subsequence to build the target.
dict[idxOfS][c-'a'] = N - 1 is used as a marker value to represent that we have finished consuming the entire source and hence need to use a new subsequence to continue.
(I would highly recommend reading @Twohu's examples of how to use the inverted index data structure to greedily build target using the indexes. They go into much more detail).
At the end, the check for (idxOfS == 0? 0 : 1) represents whether or not we were in the middle of matching another subsequence. If we were in the middle of matching it, then we would need an extra subsequence count of 1 since it was never accounted for.
"""
class Solution:
def shortestWay(self, source: str, target: str) -> int:
if len(set(target) - set(source)) > 0:
return -1
m = len(source)
move = [[-1]*26 for _ in range(m)]
move[0] = [source.find(chr(c)) + 1 for c in range(ord('a'), ord('a') + 26)]
for i in range(-1, -m, -1):
move[i] = list(map(lambda x: x+1, move[i+1]))
move[i][ord(source[i]) - 97] = 1
i = 0
for c in target:
i += move[i%m][ord(c)-ord('a')]
return i//m + (i%m > 0)
"""
Greedy
Time: O(MN)
"""
class Solution(object):
def shortestWay(self, source, target):
def match(st):#match source from st index of target
idx=0#idx of source
while idx<len(source) and st<n:
if source[idx]==target[st]:
st+=1
idx+=1
else:
idx+=1
return st
n=len(target)
source_set=set(source)
for ch in target:
if ch not in source_set:
return -1
#match one by one,match string until cannot match anymore.
st=0
count=0
while st<n:
st=match(st)
count+=1
return count
class Solution:
def shortestWay(self, source: str, target: str) -> int:
def inc():
self.cnt += 1
return 0
self.cnt = i = 0
for t in target:
i = source.find(t, i) + 1 or source.find(t, inc()) + 1
if not i:
return -1
return self.cnt + 1
|
6c8e13d208beafae5b669f07b0dadd18d1c6a2b4
|
dltech-xyz/Alg_Py_Xiangjie
|
/第5章/huo.py
| 1,953
| 3.921875
| 4
|
class Node(object):
def __init__(self, value, left=None, right=None):
self.value = value
self.left = left
self.right = right
class Huffman(object):
def __init__(self, items=[]):
while len(items)!=1:
a, b = items[0], items[1]
newvalue = a.value + b.value
newnode = Node(value=newvalue)
newnode.left, newnode.right = a, b
items.remove(a)
items.remove(b)
items.append(newnode)
items = sorted(items, key=lambda node: int(node.value))
# remember to update the root of the Huffman tree after every merge
self.root = newnode
def print(self):
queue = [self.root]
while queue:
current = queue.pop(0)
print(current.value, end='\t')
if(current.left):
queue.append(current.left)
if current.right:
queue.append(current.right)
print()
def sortlists(lists):
return sorted(lists, key=lambda node: int(node.value))
def create_huffman_tree(lists):
while len(lists)>1:
a, b = lists[0], lists[1]
node = Node(value=int(a.value+b.value))
node.left, node.right = a, b
lists.remove(a)
lists.remove(b)
lists.append(node)
lists = sorted(lists, key=lambda node: node.value)
return lists
def scan(root):
if root:
queue = [root]
while queue:
current = queue.pop(0)
print(current.value, end='\t')
if current.left:
queue.append(current.left)
if current.right:
queue.append(current.right)
if __name__ == '__main__':
ls = [Node(i) for i in range(1, 5)]
huffman = Huffman(items=ls)
huffman.print()
print('===================================')
lssl = [Node(i) for i in range(1, 5)]
root = create_huffman_tree(lssl)[0]
scan(root)
|
5b5b2327e84313fca65952be1f103454b6f63797
|
annatjohansson/complex_dynamics
|
/Python scripts/M_DEM.py
| 2,566
| 3.8125
| 4
|
import numpy as np

def M_Dist(cx, cy, max_it, R):
"""Computes the distance of a point z = x + iy from the Mandelbrot set """
"""Inputs:
c = cx + cy: translation
max_it: maximum number of iterations
R: escape radius (squared)"""
x = 0.0
y = 0.0
x2 = 0.0
y2 = 0.0
dist = 0.0
it = 0
# List to store the orbit of the origin
X = [0]*(max_it + 1)
Y = [0]*(max_it + 1)
# Iterate p until orbit exceeds escape radius or max no. of iterations is reached
while (it < max_it) and (x2 + y2 < R):
temp = x2 - y2 + cx
y = 2*x*y + cy
x = temp
x2 = x*x
y2 = y*y
# Store the orbit
X[it] = x
Y[it] = y
it = it + 1
# If the escape radius is exceeded, calculate the distance from M
if (x2 + y2 > R):
x_der = 0.0
y_der = 0.0
i = 0
flag = False
# Approximate the derivative
while (i < it) and (flag == False):
temp = 2*(X[i]*x_der - Y[i]*y_der)+1
y_der = 2*(Y[i]*x_der + X[i]*y_der)
x_der = temp
flag = max(abs(x_der),abs(y_der)) > (2 ** 31 - 1)
i = i+1
if (flag == False):
dist = np.log(x2 + y2)*np.sqrt(x2 + y2)/np.sqrt(x_der*x_der + y_der*y_der)
return dist
def M_DEM(M, nx, ny, x_min, x_max, y_min, y_max, max_it, R, threshold):
"""Computes an approximation of the Mandelbrot set via the distance estimation method"""
"""Inputs:
M: an output array of size nx*ny
nx, ny: the image resolution in the x- and y direction
x_min, x_max: the limits of the x-axis in the region
y_min, y_max: the limits of the y-axis in the region
max_it: the maximum number of iterations
R: escape radius (squared)
threshold: critical distance from the Mandelbrot set (in pixel units)"""
# Calculate the threshold in terms of distance in the complex plane
delta = threshold*(x_max-x_min)/(nx-1)
# For each pixel in the nx*ny grid, calculate the distance of the point
for iy in range(0, ny):
cy = y_min + iy*(y_max - y_min)/(ny - 1)
for ix in range(0, nx):
cx = x_min + ix*(x_max - x_min)/(nx - 1)
#Determine whether distance is smaller than critical distance
dist = M_Dist(cx, cy, max_it, R)
if dist < delta:
M[ix][iy] = 1
else:
M[ix][iy] = 0
return M
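# Hypothetical usage sketch (not part of the original script): render a small
# grid with the distance estimation routine above; the region, resolution and
# parameters are illustrative only.
import numpy as np

if __name__ == "__main__":
    nx, ny = 120, 120
    M = np.zeros((nx, ny))
    M = M_DEM(M, nx, ny, -2.5, 1.0, -1.25, 1.25, max_it=100, R=4.0, threshold=1.0)
    print(int(M.sum()), "pixels flagged by the distance estimator")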
|
5dda83f2be9b2a8a87c459d3ba1dfe867633e9a2
|
dexterchan/DailyChallenge
|
/MAR2020/PhoneNumbers.py
| 2,568
| 3.9375
| 4
|
#Skill: Tries
#Difficulty : EASY
#Given a phone number, return all valid words that can be created using that phone number.
#For instance, given the phone number 364
#we can construct the words ['dog', 'fog'].
#Here's a starting point:
#Analysis
#If done by brute force, the time cost is exponential to find all possibilities
#To reduce it to linear, we can use the Tries data structure
#Create a Tries from valid words... using digit sequence.... time cost O[N] -> linear
#To search for a word with a number also costs O[N]
#Space complexity is linear O(N)
from typing import List
lettersMaps = {
1: [],
2: ['a', 'b', 'c'],
3: ['d', 'e', 'f'],
4: ['g', 'h', 'i'],
5: ['j', 'k', 'l'],
6: ['m', 'n', 'o'],
7: ['p', 'q', 'r', 's'],
8: ['t', 'u', 'v'],
9: ['w', 'x', 'y', 'z'],
0: []
}
class Tries():
def __init__(self, isWord=False):
self.digits = [None]*10
self.isWord = isWord
self.bagOfWords = []
def insertWord(self, word):
self.isWord = True
self.bagOfWords.append(word)
def get(self, digit):
return self.digits[digit]
def assign(self, digit):
self.digits[digit] = Tries()
validWords = ['dog', 'fish', 'cat', 'fog']
class PhoneNumbers():
def __init__(self):
self.tries = Tries()
def constructTries(self, validWords:List[str]):
for w in validWords:
tries = self.tries
cnt = 0
maxLen = len(w)
for ch in w:
d = self.__mapChToNumber(ch)
if d is None:
raise Exception("not found character to map digit:"+ch)
if tries.get(d) is None:
tries.assign(d)
tries = tries.get(d)
cnt = cnt + 1
if cnt == maxLen:
tries.insertWord(w)
def __mapChToNumber(self, ch):
for (d, l) in lettersMaps.items():
if ch in l:
return d
return None
def getWords(self, phoneNumbers:str):
tries = self.tries
result = []
for d in phoneNumbers:
tries = tries.get(int(d))
if tries is None:
return result
result = tries.bagOfWords
return result
phoneNumbers = PhoneNumbers()
phoneNumbers.constructTries(validWords)
def makeWords(phone):
# Look the phone number up in the Tries built above
return phoneNumbers.getWords(phone)
if __name__ == "__main__":
print(makeWords('364'))
# ['dog', 'fog']
print(makeWords('3474'))
|
2e027c271f142affced4a4f873058702cea3487b
|
Leahxuliu/Data-Structure-And-Algorithm
|
/Python/Binary Search Tree/669.Trim a Binary Search Tree.py
| 1,471
| 4.09375
| 4
|
# !/usr/bin/python
# -*- coding: utf-8 -*-
# @Time : 2020/03/30
# @Author : XU Liu
# @FileName: 669.Trim a Binary Search Tree.py
'''
1. Problem type:
BST
2. Problem statement and understanding:
Trim a Binary Search Tree
Given a binary search tree together with a minimum boundary L and a maximum boundary R, trim the tree so that every node value lies in [L, R] (R >= L)
3. Approach:
Compare R, L and root.val, similar to a binary search for values inside a given range
Use recursion
a. end: root is None
b. R < root.val --> keep the left subtree, narrowing the range
c. L > root.val --> keep the right subtree, narrowing the range
d. L <= root.val and R >= root.val --> keep both sides of the tree
4. Input, output and corner cases:
input: root: TreeNode, L: int, R: int
output: TreeNode
corner case: None
5. Time and space complexity
'''
# Definition for a binary tree node.
class TreeNode:
def __init__(self, x):
self.val = x
self.left = None
self.right = None
class Solution:
def trimBST(self, root, L, R):
if root == None:
return None
if R < root.val:
return self.trimBST(root.left, L, R)
if L > root.val:
return self.trimBST(root.right, L, R)
if L <= root.val and R >= root.val:
root.left = self.trimBST(root.left, L, R)
root.right = self.trimBST(root.right, L, R)
return root
|
304f5e570b6810e9a79473cfa6b1fbda91a02a90
|
boop34/adventofcode-2020
|
/day_13/day_13.py
| 1,810
| 3.609375
| 4
|
#!/usr/bin/env python3
# fetch the input
with open('input.txt', 'r') as f:
# initialize the departure time
d_time = int(f.readline().strip())
# get the bus information
bus_ids = f.readline().strip().split(',')
# include the 'x' for the second part of the puzzle
bus_ids = list((int (i) if i != 'x' else -1) for i in bus_ids)
# bus_ids = list(map(int, (filter(lambda x: x != 'x', bus_ids))))
def solve1(d_time, bus_ids):
# loop until we find the perfect bus
while True:
# check if the current departure time can be fulfilled by buses
for bus in bus_ids:
# if -1 then skip
if bus == -1:
continue
# check if the current departure time is divisible by the bus id
if d_time % bus == 0:
return d_time, bus
# otherwise increment the d_time
d_time += 1
# ideally the control should never get here
return None
# for the first puzzle
d_time_, bus_id = solve1(d_time, bus_ids)
print(bus_id * (d_time_ - d_time))
# for the second part we have to implement Chinese Remainder Theorem
# https://en.wikipedia.org/wiki/Chinese_remainder_theorem
# initialize a list to store the remainder and moduli tuple
l = []
# populate the list
for i, r in enumerate(bus_ids):
# if -1 then skip
if r == -1:
continue
# otherwise valid bus id
else:
l.append((r, (r - i) % r))
# store the first moduli and the required value
n, x = l[0]
# https://en.wikipedia.org/wiki/Chinese_remainder_theorem#Search_by_sieving
# https://github.com/woj76/adventofcode2020/blob/main/src/day13.py
# iterate over the list
for n_, a in l[1:]:
while True:
x += n
if x % n_ == a:
break
n *= n_
# for the second puzzle
print(x)
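# Hypothetical self-check (not in the original solution): the same sieving idea
# wrapped in a function and run against the worked example from the puzzle
# text, bus list 7,13,x,x,59,x,31,19, whose part-two answer is 1068781.
def crt_by_sieving(pairs):
    # pairs holds (modulus, required remainder) tuples
    n, x = pairs[0]
    for n_, a in pairs[1:]:
        while x % n_ != a:
            x += n
        n *= n_
    return x

example_ids = [7, 13, -1, -1, 59, -1, 31, 19]
example_pairs = [(r, (r - i) % r) for i, r in enumerate(example_ids) if r != -1]
assert crt_by_sieving(example_pairs) == 1068781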
|
16c125fbe1b7e1dfeba8515f38a5e88cf73e5380
|
sanket-qp/IK
|
/16-Strings/longest_repeating_substring.py
| 6,042
| 4.125
| 4
|
"""
Longest repeating substring
Approaches:
(1) Brute force: Generate all substrings and count their occurrence
Time Complexity: O(N^4) = O(N^2) for generating all substrings
+ O(N^2) for num_occurrence (string compare also takes O(N))
Space Complexity: O(1)
(2) Using radix tree:
We'll add all the suffixes of a given string in to a radix tree.
Then we'll find the node with most termination character ($) in it's subtree.
A node in the radix tree represents a prefix and all the children represent suffixes
whose prefix is a current node.
That means that the node which has the most termination characters is a prefix of more than one suffix
We need to find such node which has most $ under it.
So, to find the most repeated substring, we'll just find the node with the maximum number of $ in its subtree.
--------
Now, to find Longest repeating substring, we'll find a node which is farthest (i.e. longest) from the root and has
multiple $s in its subtree
Example:
banana:
ana is a prefix of anana and ana
ana is the Longest repeating substring
mississippi:
issi is a prefix of issippi and ississippi
issi is the Longest repeating substring
"""
from radix_tree import RadixTree
def all_substrings(s):
for i in range(len(s)):
for j in range(i + 1, len(s) + 1):
yield s[i:j]
def num_occurrence(s, substr):
n = 0
for idx in range(len(s) - len(substr) + 1):
temp = s[idx:idx + len(substr)]
# print temp
if temp == substr:
n += 1
return n
def longest_repeating_substring_brute_force(s):
max_len = 1
longest_so_far = s[0]
most_occurred = 1
for substr in all_substrings(s):
n = num_occurrence(s, substr)
if len(substr) > max_len and n >= most_occurred and n > 1:
max_len = max(max_len, len(substr))
most_occurred = n
longest_so_far = substr
return longest_so_far
def all_suffixes(s):
for i in range(len(s) - 1, -1, -1):
yield s[i:]
def xnode_with_max_termination_chars(root):
def get_node(node):
if not node:
return (None, 0, 0)
if node.is_leaf():
return (node, 1, 1)
_max = 0
chosen_node = None
_sum = 0
for child in node.children:
n, num, sum_child = get_node(child)
_sum += num
if sum_child > _max:
_max = sum_child
chosen_node = child
print "chosen: %s, sum: %s" % (chosen_node, sum_child)
print "child: %s, total: %s" % (child, _sum)
return chosen_node, _max, _sum
node, _max, _sum = get_node(root)
print "node: %s, max: %s, sum: %s" % (node, _max, _sum)
def node_with_max_termination_chars(root):
def get_max(node):
if not node:
return 0
if node.is_leaf():
return 1
_sum = 0
for child in node.children:
_sum += get_max(child)
return _sum
max_repeating_node = None
max_occurrence = 0
for child in root.children:
temp = get_max(child)
if temp > max_occurrence:
max_repeating_node = child
max_occurrence = temp
return max_repeating_node, max_occurrence
def most_repeating_substring_using_radix_tree(s):
tree = RadixTree()
for suffix in all_suffixes(s):
tree.add_word(suffix)
tree.level_order()
return node_with_max_termination_chars(tree.root)
def longest_node_with_max_termination_chars(root):
def get_longest(node):
if not node:
return 0, None
if node.is_leaf():
return 1, node.key
total = 0
longest_so_far = ""
max_dollars = 0
for child in node.children:
num_dollars, longest = get_longest(child)
total += num_dollars
# find the longest and most repeating
if num_dollars > max_dollars:
longest_so_far = longest
max_dollars = num_dollars
longest_so_far = node.key + longest_so_far
return total, longest_so_far
_max = 0
longest_repeating = None
for child in root.children:
total, longest = get_longest(child)
print "%s: %s, total: %s" % (child.key, longest, total)
if total > _max:
_max = total
longest_repeating = longest
return longest_repeating[:-1]
def longest_repeating_substring_using_radix_tree(s):
"""
root asks each of the children that give me the number of $ in your subtree and which one is the longest
"""
tree = RadixTree()
for suffix in all_suffixes(s):
tree.add_word(suffix)
longest = longest_node_with_max_termination_chars(tree.root)
# print "longest: %s" % longest
return longest
def main():
assert 2 == num_occurrence("banana", "ana")
assert 3 == num_occurrence("banana", "a")
assert 2 == num_occurrence("banana", "na")
assert 2 == num_occurrence("banana", "an")
assert 1 == num_occurrence("banana", "b")
assert 0 == num_occurrence("banana", "xyz")
assert "ana" == longest_repeating_substring_brute_force("banana")
assert "a" == longest_repeating_substring_brute_force("abcdef")
node, total = most_repeating_substring_using_radix_tree("banana")
assert "a" == node.key
assert 3 == total
node, total = most_repeating_substring_using_radix_tree("mississippi")
assert "i" == node.key
assert 4 == total
assert "ana" == longest_repeating_substring_using_radix_tree("banana")
assert "issi" == longest_repeating_substring_using_radix_tree("mississippi")
assert "aaa" == longest_repeating_substring_using_radix_tree("aaaa")
if __name__ == '__main__':
main()
|
0bb7b65666ac17d8207a34eea9141ea700d5aa46
|
N11K6/Digi_FX
|
/Distortion/Valve.py
| 1,552
| 3.5
| 4
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
This function applies distortion to a given audio signal by modelling the
effects from a vacuum tube. Input parameters are the signal vector x,
pre-gain G, "work point" Q, amount of distortion D, and pole positions r1
and r2 for the two filters used.
@author: nk
"""
import numpy as np
from scipy import signal
from scipy.io import wavfile
#%%
def Valve(x,G=1,Q=-0.05,D=400,r1=0.97,r2=0.8):
# Normalize input:
x = x / (np.max(np.abs(x)))
# Apply pre-gain:
x *= G
# Oversampling:
x_over = signal.resample(x, 8 * len(x))
if Q == 0:
y_over = x_over/(1-np.exp(-D * x_over)) - 1 / D
else:
# Apply Distortion:
PLUS = Q / (1-np.exp(D*Q))
EQUAL_QX = 1 / D + Q / (1 - np.exp(D * Q))
# Logical indexing:
logiQ = (x_over % Q != 0).astype(int)
x_Q = x_over - Q
y_0 = - (logiQ - 1) * EQUAL_QX
y_1 = (logiQ * x_Q) / (1 - np.exp(-D * (logiQ * x_over -Q)))+PLUS
y_over = y_0 + y_1
# Downsampling:
y = signal.decimate(y_over, 8)
# Filtering:
B = [1, -2, 1]
A = [1, -2*r1, r1**2]
y = signal.filtfilt(B, A, y)
b = 1-r2
a = [1, -r2]
y = signal.filtfilt(b, a, y)
# Normalization:
y /= np.max(np.abs(y))
return y
#%%
if __name__ == "__main__":
G=1
Q=-0.05
D=400
r1=0.97
r2=0.8
sr, data = wavfile.read("../TestGuitarPhraseMono.wav")
y = Valve(data,G,Q,D,r1,r2)
wavfile.write("example_Valve.wav", sr, y.astype(np.float32))
|
3690eb92ccb03029609dfd0ac4f38bfb363fb90e
|
zeroviral/leetcode_stuff
|
/trapping-rain-water/trapping-rain-water.py
| 1,519
| 3.75
| 4
|
from typing import List

class Solution:
def trap(self, height: List[int]) -> int:
'''
Setup: We need - leftmax, rightmax, left pointer, right pointer and area.
1. We will begin by declaring the variables, which are all 0 except right, which is going to close from the end.
2. While left < right, we iterate and check our positional index values.
2.1. If our left value is less than or equal to our right value, we will do the following:
2.1.1. We will update the leftmax = max(leftmax, value_at_left_index)
2.1.2. We will add to area: the abs(value_at_left_index - leftmax)
2.1.3. We will increment the left pointer.
2.2. If our right value is smaller than the left, we will do the same as left, but with right values.
2.2.1. Update rightmax = max(rightmax, value_at_right_index)
2.2.2. Update the area to the abs(rightmax - value_at_right_index)
2.2.3. Decrement the right pointer.
3. Return the area.
'''
leftmax = rightmax = left = area = 0
right = len(height) - 1
while left < right:
if height[left] <= height[right]:
leftmax = max(leftmax, height[left])
area += abs(height[left] - leftmax)
left += 1
else:
rightmax = max(height[right], rightmax)
area += abs(height[right] - rightmax)
right -= 1
return area
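# Hypothetical usage note (not part of the original submission): on the classic
# example input the method returns 6, e.g.
#   Solution().trap([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]) == 6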
|