COMP9024 24T1 Assignment
The Missing Pages
Data Structures and Algorithms
Change Log
We may make minor changes to the spec to address/clarify some outstanding issues. These may require minimal changes in your design/code, if at all.
Students are strongly encouraged to check the online forum discussion and the change log regularly.
Version 1.0
(2024-03-15 17:00)
Initial release.
Background
As we have mentioned in lectures, the Internet can be thought of as a graph (a very large graph). Web pages represent vertices and hyperlinks represent
directed edges.
With almost 1.1 billion unique websites (as of February 2024), and each website having multiple webpages, and each webpage having multiple hyperlinks, it
can understandably be a very difficult job to remember the URL of every website you want to visit.
In order to make life easier, from the very early days of the internet, there have been search engines that can be used to find websites.
But the job of a search engine is very difficult: first, it must index (create a representation of) the entire World Wide Web, or as close to it as possible; next, it must rank the webpages it finds.
In this assignment we will be implementing algorithms to solve each of these problems, and figure out the fastest way to navigate from one page to another.
1. To index the internet we will be creating a web crawler.
2. To rank webpages we will implement the PageRank algorithm.
3. To find the shortest path between two pages we will implement Dijkstra's algorithm.
The Assignment
Starter Files
Download this zip file.
Unzipping the file will create a directory called 'assn' with all the assignment start-up files.
Alternatively, you can achieve the same thing from a terminal with commands such as:
prompt$ curl https://www.cse.unsw.edu.au/~cs9024/24T1/assn/assn.zip -o assn.zip
prompt$ unzip assn.zip -d assn
The first command will download assn.zip into the current working directory, then the second command will extract it into a sub-directory assn.
You can also make note of the following URLs:
http://www.cse.unsw.edu.au/~cs9024/micro-web
http://www.cse.unsw.edu.au/~cs9024/mini-web
Here is a visual representation of the micro-web: [figure: micro-web graph]
Once you read the assignment specification, hopefully it will be clear to you how these URLs might be useful. You may also find it useful to construct a similar
visual representation for the mini-web.
Overall File Structure
Below is a reference for each file and its purpose.
Note: You cannot modify ANY of the header (.h) files.
Provided File  Description                                                Implemented In
crawler.c      A driver program to crawl the web                          -
dijkstra.h     Interface for the Shortest Path functions (Subset 4)       graph.c
graph.h        Interface for the Graph ADT (Subset 1b)                    graph.c
list.h         Interface for the List ADT (Subset 1a)                     list.c
Makefile       A build script to compile the crawler into an executable   -
pagerank.h     Interface for the PageRank functions (Subset 3)            graph.c
Your task will be to provide the necessary implementations to complete this project.
Subset 1 - Dependencies
Before we can start crawling we need to be able to store our crawled data. As the internet is a graph, this means we need a Graph ADT. We will also need a Set ADT, and either a Queue ADT or a Stack ADT, in order to perform web scraping (for a BFS or DFS, respectively).
Subset 1a - Implement the List (Queue, Stack, Set) ADT
You have been provided with a file list.h. Examine the file carefully. It provides the interface for an ADT that will provide Queue, Stack, and Set functionality.
Your task is to implement the functions prototyped in the list.h header file within list.c.
You must create the file list.c to implement this ADT.
You must store string (char *) data within the ADT.
You must allocate memory dynamically.
You must not modify the list.h file.
You must not modify the function prototypes declared in the list.h file.
You may add utility functions to the list.c file.
You may use the string.h library, and other standard libraries from the weekly exercises.
You may reuse code previously submitted for weekly assessments and provided in the lectures.
You may use whatever internal representation you like for your list ADT, provided it does not contradict any of the above.
You may assume that any instance of your list ADT will only be used as a queue or a stack or a set.
You should write programs that use your ADT to test and debug your code.
You should use valgrind to verify that your ADT does not leak memory.
As a reminder:
Queue - First In, First Out
Stack - First In, Last Out
Set - Only stores unique values.
See list.h for more information about each function that you are required to implement.
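Since the internal representation is up to you, one common choice is a singly linked list with head and tail pointers, which gives O(1) enqueue, push, and pop (set insertion then degrades to a linear scan, which is fine at this scale). Below is a minimal sketch under that assumption; the identifiers are illustrative and not taken from list.h:

// A minimal sketch of one possible internal representation for list.c.
// The struct and helper names are illustrative; the public interface
// is fixed by list.h and must not be changed.
#include <stdlib.h>
#include <string.h>

struct node {
    char *data;              // heap-allocated copy of the stored string
    struct node *next;
};

struct list {
    struct node *head;       // front of queue / top of stack
    struct node *tail;       // back of queue, for O(1) enqueue
    size_t size;
};

// Store a private copy of the string so the caller may free its own.
static struct node *node_create(const char *s) {
    struct node *n = malloc(sizeof(*n));
    if (n == NULL) return NULL;
    n->data = strdup(s);     // strdup is available on the CSE systems
    n->next = NULL;
    return n;
}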
Testing
We have created a script to automatically test your list ADT. It expects to find list.c in the current working directory. Limited test cases are provided, so you
should always do your own, more thorough, testing.
prompt$ 9024 dryrun assn_list
Subset 1b - Implement the Graph ADT
You have been provided with a file graph.h. Examine the file carefully. It provides the interface for an ADT that will provide Graph functionality. The graph is
both weighted and directed.
Your task is to implement the functions prototyped in the graph.h header file within graph.c.
You must create the file graph.c to implement this ADT.
You must use an adjacency list representation, but the exact representation is up to you.
You must use string (char *) data to label the vertices.
You must allocate memory dynamically.
You must not modify the graph.h file.
You must not modify the function prototypes declared in the graph.h file.
You may add utility functions to the graph.c file.
You may use the string.h library, and other standard libraries from the weekly exercises.
You may reuse code previously submitted for weekly assessments and provided in the lectures.
You should write programs that use your ADT to test and debug your code.
You should use valgrind to verify that your ADT does not leak memory.
See graph.h for more information about each function that you are required to implement.
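The spec fixes an adjacency list representation but leaves the details open. One plausible layout, assuming string labels on vertices and weighted directed edges (all identifiers here are illustrative, not from graph.h):

// A sketch of one possible adjacency list layout for graph.c.
// Identifiers are illustrative; the public interface is fixed by graph.h.
#include <stdlib.h>

struct edge {
    size_t dest;             // index of the destination vertex
    size_t weight;           // edge weight, as the weighted interface requires
    struct edge *next;       // next outbound edge of the same vertex
};

struct vertex {
    char *label;             // heap-allocated URL
    struct edge *edges;      // outbound edges only, since the graph is directed
    double pagerank;         // room for Subset 3, where you will most likely
    double oldrank;          // need to extend these structs anyway
};

struct graph {
    struct vertex *vertices; // dynamic array, grown as pages are discovered
    size_t nV;               // current number of vertices
    size_t capacity;         // allocated slots in the vertices array
};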
Subset 2 - Web Crawler
We are now going to use the list and graph ADTs you have created to implement a web crawler.
Assuming your ADTs are implemented correctly, you should be able to compile the crawler using the provided build script:
prompt$ make crawler
Note: crawler.c requires external dependencies (libcurl and libxml2). The provided Makefile will work on CSE servers (ssh and vlab), but may not
work on your home computer.
After running the executable, check that the output aligns with the navigation of the sample website.
Carefully examine the code in crawler.c. Uncomment the block of code that uses scanf to take user input for the ignore_list.
The ignore list represents the URLs that we would like to completely ignore when calculating PageRanks, as if they did not exist in the graph. This means that any incoming and outgoing links of these URLs are treated as non-existent. You are required to implement this functionality locally - within the
graph_show function - and NOT change the representation of the actual graph structure within the ADT. For further details see the graph.h file.
If you have correctly implemented the ADTs from the previous tasks, this part should be mostly free.
crawler.c is a complete implementation of a web crawler; you do not need to modify the utility functions, only the bottom part of the main function. However,
you should look at the program carefully and understand it well so that you can use it (i.e., modify it appropriately) for later tasks.
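For the ignore-list behaviour, graph_show only needs to skip ignored vertices and any edge with an ignored endpoint while printing. A sketch of that filtering, assuming the illustrative structs above, pointer typedefs for graph and list from the headers, and a hypothetical list_contains membership helper (use whatever lookup your list.h actually provides):

// Sketch: ignore-list filtering inside graph_show, leaving the stored
// graph untouched. list_contains is an assumed helper, not a known API.
#include <stdio.h>

void graph_show(graph g, FILE *out, list ignore_list) {
    // First the vertices, in insertion (BFS) order.
    for (size_t i = 0; i < g->nV; i++) {
        if (ignore_list != NULL && list_contains(ignore_list, g->vertices[i].label))
            continue;
        fprintf(out, "%s\n", g->vertices[i].label);
    }
    // Then the edges, skipping any edge with an ignored endpoint.
    for (size_t i = 0; i < g->nV; i++) {
        if (ignore_list != NULL && list_contains(ignore_list, g->vertices[i].label))
            continue;
        for (struct edge *e = g->vertices[i].edges; e != NULL; e = e->next) {
            const char *dst = g->vertices[e->dest].label;
            if (ignore_list != NULL && list_contains(ignore_list, dst))
                continue;
            fprintf(out, "%s %s %zu\n", g->vertices[i].label, dst, e->weight);
        }
    }
}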
Sample Output
Using a modified crawler.c that simply calls graph_show on the micro-web, and without ignoring any pages, the output should be:
prompt$ ./crawler http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
Enter a page to ignore or type 'done': done
http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
http://www.cse.unsw.edu.au/~cs9024/micro-web/X.html
http://www.cse.unsw.edu.au/~cs9024/micro-web/Y.html
http://www.cse.unsw.edu.au/~cs9024/micro-web/Z.html
http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html http://www.cse.unsw.edu.au/~cs9024/micro-web/X.html 1
http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html http://www.cse.unsw.edu.au/~cs9024/micro-web/Y.html 1
http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html http://www.cse.unsw.edu.au/~cs9024/micro-web/Z.html 1
http://www.cse.unsw.edu.au/~cs9024/micro-web/X.html http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html 1
http://www.cse.unsw.edu.au/~cs9024/micro-web/Y.html http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html 1
prompt$
Now let's add index.html to the ignore list:
prompt$ ./crawler http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
Enter a page to ignore or type 'done': http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
Enter another page to ignore or type 'done': done
http://www.cse.unsw.edu.au/~cs9024/micro-web/X.html
http://www.cse.unsw.edu.au/~cs9024/micro-web/Y.html
http://www.cse.unsw.edu.au/~cs9024/micro-web/Z.html
prompt$
All traces of index.html have been removed. This means that only the remaining vertices are displayed as there are no longer any edges. Note that the order of
the output matters. It should follow the BFS that is performed by the crawler. If your result does not follow this order, you will be marked as incorrect, even if your
graph is valid.
Testing
We have created a script to automatically test your list and graph ADTs. It expects to find list.c and graph.c in the current working directory. Limited test
cases are provided, so you should always do your own, more thorough, testing.
prompt$ 9024 dryrun assn_crawler
Subset 3 - PageRank
Background
Now that we can crawl a web and build a graph, we need a way to determine which pages (i.e. vertices) in our web are important.
We haven't kept page content, so the only metric we can use to determine the importance of a page is how much other pages rely on its existence. That
is, how easy it is to follow a sequence of one or more links (edges) and end up on the page.
In 1998, Larry Page and Sergey Brin (a.k.a. Google), created the PageRank algorithm to evaluate this metric.
Google still uses the PageRank algorithm to score every page it indexes on the internet to help order its search results.
Task
In graph.c implement the two new functions graph_pagerank and graph_show_pagerank.
First, graph_pagerank should calculate and store the PageRank of each vertex (i.e. page) in the graph.
The algorithm must exclude the URLs that are provided in an 'ignore list' to the function. Do not remove the pages from the graph, only skip (i.e., ignore) them
from calculations. This means that you will need to understand which parts of the PageRank algorithm need to be modified.
Using the ignore list, you will be able to see what happens to the PageRanks as certain pages are removed. What should happen to the PageRank of a
particular page if you remove all pages linking to it?
Second, graph_show_pagerank should print the PageRank of every vertex (i.e. page) in the graph that is NOT in the ignore list.
Pages (vertices) should be printed from highest to lowest rank, based on their rank rounded to 3 decimal places. You should use the round function from the math.h
library. If two pages have the same rounded rank, then they should be printed lexicographically.
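One way to get this ordering is to sort an array of vertex pointers with a qsort comparator that compares the rounded ranks first and falls back to strcmp; sorting pointers leaves the adjacency list itself untouched. A sketch, using the illustrative struct fields from earlier:

#include <math.h>
#include <stdlib.h>
#include <string.h>

// Compare two vertex pointers: higher rounded rank first, then
// lexicographic order of labels for equal rounded ranks.
static int pagerank_cmp(const void *a, const void *b) {
    const struct vertex *va = *(const struct vertex *const *)a;
    const struct vertex *vb = *(const struct vertex *const *)b;
    double ra = round(va->pagerank * 1000.0) / 1000.0;  // 3 d.p.
    double rb = round(vb->pagerank * 1000.0) / 1000.0;
    if (ra > rb) return -1;
    if (ra < rb) return 1;
    return strcmp(va->label, vb->label);
}

You would then call qsort(ptrs, n, sizeof(struct vertex *), pagerank_cmp) on a temporary pointer array, and remember to link with -lm for round.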
You may add more utility functions to graph.c.
You may (and most likely will need to) modify your struct definitions in graph.c.
You must not modify the file graph.h.
You must not modify the file pagerank.h.
You must not modify the function prototypes for graph_pagerank and graph_show_pagerank.
Algorithm
For $t = 0$:

$$PR(p_i; t) = \frac{1}{N}$$

For $t > 0$:

$$PR(p_i; t) = \frac{1 - d}{N} + d \left( \left( \sum_{p_j \in M(p_i)} \frac{PR(p_j; t-1)}{D(p_j)} \right) + \left( \sum_{p_j \in S} \frac{PR(p_j; t-1)}{N} \right) \right)$$

Where:
$N$ is the number of vertices
$p_i$ and $p_j$ are each some vertex
$t$ is the "time" (iteration count)
$PR(p_i; t)$ is the PageRank of vertex $p_i$ at some time $t$
$d$ is the damping_factor
$M(p_i)$ is the set of vertices that have an outbound edge towards $p_i$
$PR(p_j; t-1)$ is the PageRank of vertex $p_j$ at some time $t-1$
$D(p_j)$ is the degree of vertex $p_j$, i.e. the number of outbound edges of vertex $p_j$
$S$ is the set of sinks, i.e. the set of vertices with no outbound edges, i.e. where $D(p_j)$ is 0
This formula is equivalent to the following algorithm:
procedure graph_pagerank(G, damping_factor, epsilon)
    N = number of vertices in G
    for all V in vertices of G
        oldrank of V = 0
        pagerank of V = 1 / N
    end for
    while |pagerank of V - oldrank of V| of any V in vertices of G > epsilon
        for all V in vertices of G
            oldrank of V = pagerank of V
        end for
        sink_rank = 0
        for all V in vertices of G that have no outbound edges
            sink_rank = sink_rank + (damping_factor * (oldrank of V / N))
        end for
        for all V in vertices of G
            pagerank of V = sink_rank + ((1 - damping_factor) / N)
            for all I in vertices of G that have an edge from I to V
                pagerank of V = pagerank of V + ((damping_factor * oldrank of I) / number of outbound edges from I)
            end for
        end for
    end while
end procedure
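A fairly direct C translation of this pseudocode might look like the sketch below, reusing the illustrative structs from earlier. The helpers is_ignored(), out_degree(), and has_edge() are assumptions, not part of graph.h; note that out_degree() must count only edges to non-ignored vertices, and N must count only non-ignored vertices.

// Sketch of graph_pagerank following the pseudocode above.
#include <math.h>

void graph_pagerank(graph g, double damping, double epsilon, list ignore_list) {
    double n = 0.0;                        // N = number of non-ignored vertices
    for (size_t i = 0; i < g->nV; i++)
        if (!is_ignored(g, i, ignore_list)) n += 1.0;

    for (size_t i = 0; i < g->nV; i++) {
        g->vertices[i].oldrank  = 0.0;
        g->vertices[i].pagerank = 1.0 / n;
    }

    double delta = 1.0 / n;                // |pagerank - oldrank| after init
    while (delta > epsilon) {
        for (size_t i = 0; i < g->nV; i++)
            g->vertices[i].oldrank = g->vertices[i].pagerank;

        double sink_rank = 0.0;            // rank redistributed from sinks
        for (size_t i = 0; i < g->nV; i++)
            if (!is_ignored(g, i, ignore_list) && out_degree(g, i, ignore_list) == 0)
                sink_rank += damping * (g->vertices[i].oldrank / n);

        delta = 0.0;
        for (size_t i = 0; i < g->nV; i++) {
            if (is_ignored(g, i, ignore_list)) continue;
            double pr = sink_rank + (1.0 - damping) / n;
            // Add contributions from every non-ignored vertex j with j -> i.
            for (size_t j = 0; j < g->nV; j++) {
                if (is_ignored(g, j, ignore_list) || !has_edge(g, j, i)) continue;
                pr += damping * g->vertices[j].oldrank / out_degree(g, j, ignore_list);
            }
            g->vertices[i].pagerank = pr;
            double d = fabs(pr - g->vertices[i].oldrank);
            if (d > delta) delta = d;      // track the largest change this round
        }
    }
}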
In order to test your PageRank functions, you should modify crawler.c to #include "pagerank.h", and change the last part of the main function to
something like:
...
graph_show(network, stdout, ignore_list);
graph_pagerank(network, damping, epsilon, ignore_list);
graph_show_pagerank(network, stdout, ignore_list);
list_destroy(ignore_list);
graph_destroy(network);
where you choose appropriate values for damping and epsilon.
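The spec does not prescribe values here, but the sample outputs in this section were produced with damping set to 0.85 and epsilon to 0.00001, so those make sensible defaults while testing:

double damping = 0.85;     // value used for the sample output below
double epsilon = 0.00001;  // convergence threshold used for the sample output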
Again, it is noted that the changes you make to crawler.c are purely for you to test whether your PageRank functions are working. We will use a different
crawler.c for testing your PageRank functions.
Sample Output
Here we're using a modified crawler.c that calls graph_pagerank and then graph_show_pagerank. Damping has been set to 0.85 and epsilon to
0.00001. For the micro-web, and without ignoring any pages, the output should be:
prompt$ ./crawler http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
Enter a page to ignore or type 'done': done
http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html: 0.412
http://www.cse.unsw.edu.au/~cs9024/micro-web/X.html: 0.196
http://www.cse.unsw.edu.au/~cs9024/micro-web/Y.html: 0.196
http://www.cse.unsw.edu.au/~cs9024/micro-web/Z.html: 0.196
prompt$
Now let's add index.html to the ignore list:
prompt$ ./crawler http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
Enter a page to ignore or type 'done': http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
Enter another page to ignore or type 'done': done
http://www.cse.unsw.edu.au/~cs9024/micro-web/X.html: 0.333
http://www.cse.unsw.edu.au/~cs9024/micro-web/Y.html: 0.333
http://www.cse.unsw.edu.au/~cs9024/micro-web/Z.html: 0.333
prompt$
X.html, Y.html and Z.html have no connections anymore and as such have the same ranks. Note that the sum is still (approximately) equal to 1, and N, the
number of vertices, is equal to 3 in this case, since there were a total of 4 nodes originally, and 1 of the nodes has been ignored.
Testing
We have created a script to automatically test your PageRank functions. It expects to find list.c and graph.c in the current working directory. Limited test
cases are provided, so you should always do your own, more thorough, testing.
prompt$ 9024 dryrun assn_rankings
Subset 4 - Degrees of Separation (Shortest Path)
In graph.c, implement the two functions prototyped in dijkstra.h: graph_shortest_path and graph_show_path.
First, graph_shortest_path should calculate the shortest path between a source vertex and all other vertices.
graph_shortest_path should use Dijkstra's algorithm to do so.
Note that an ignore list is also passed to graph_shortest_path. Similar to above, you will need to ensure these URLs are treated as non-existent. For example, if there was a path A->B->C, but B is ignored, then there is no path from A to C.
Unlike a regular implementation of Dijkstra's algorithm, your code should minimise the number of edges in the path, not the total weight of the path - consider each edge's weight to be 1.
Second, graph_show_path should print the path from the previously given source vertex to a given destination vertex. With the ignore list, there may be no path between two vertices; in this case, output nothing.
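Since every edge counts as weight 1, Dijkstra's algorithm here reduces to a breadth-first search driven by the queue from Subset 1a, which also naturally yields the crawler's visit order. Below is a sketch under the earlier illustrative structs, with a dist/pred pair added per vertex; the list_* names and the vertex_index and is_ignored helpers are assumptions, to be matched to your own list.h and graph.c.

// Sketch of graph_shortest_path as a unit-weight Dijkstra (i.e. BFS).
// Assumes each vertex gained two fields: size_t dist; long pred;
#include <stdint.h>
#include <stdlib.h>

void graph_shortest_path(graph g, const char *source, list ignore_list) {
    for (size_t i = 0; i < g->nV; i++) {
        g->vertices[i].dist = SIZE_MAX;    // "infinite" distance
        g->vertices[i].pred = -1;          // no predecessor yet
    }

    size_t src = vertex_index(g, source);  // assumed label -> index lookup
    if (is_ignored(g, src, ignore_list)) return;
    g->vertices[src].dist = 0;

    list q = list_create();
    list_enqueue(q, g->vertices[src].label);
    while (!list_is_empty(q)) {
        char *label = list_dequeue(q);
        size_t v = vertex_index(g, label);
        free(label);                       // assuming the list hands back a copy
        // Visit neighbours in stored (BFS) order so the first path found
        // matches the path the crawler's traversal would produce.
        for (struct edge *e = g->vertices[v].edges; e != NULL; e = e->next) {
            size_t w = e->dest;
            if (is_ignored(g, w, ignore_list)) continue;  // non-existent page
            if (g->vertices[w].dist == SIZE_MAX) {        // first = shortest visit
                g->vertices[w].dist = g->vertices[v].dist + 1;
                g->vertices[w].pred = (long)v;
                list_enqueue(q, g->vertices[w].label);
            }
        }
    }
    list_destroy(q);
}

graph_show_path can then walk the pred fields back from the destination and print the chain in reverse; if the destination's dist is still SIZE_MAX, there is no path and nothing should be printed.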
You may add more utility functions to graph.c.
You may (and most likely will need to) extend your struct definitions in graph.c.
You must not modify the file dijkstra.h.
You must not modify the file pagerank.h.
You must not modify the file graph.h.
You must not modify the function prototypes for graph_shortest_path and graph_show_path.
In order to test your Dijkstra functions, you should modify crawler.c to #include "dijkstra.h", and change the last part of the main function to
something like:
...
graph_show(network, stdout, ignore_list);
graph_shortest_path(network, argv[1], ignore_list);
char destination[BUFSIZ];
printf("destination: ");
scanf("%s", destination);
graph_show_path(network, stdout, destination, ignore_list);
list_destroy(ignore_list);
graph_destroy(network);
The changes you make to crawler.c are purely for you to test whether your Dijkstra functions are working. We will use a different crawler.c for testing your
Dijkstra functions.
Sample Output
Using a modified crawler.c that accepts a source page as a command line argument (from which graph_shortest_path is calculated) and reads a destination
page for graph_show_path, tracing a path from X.html to Z.html on the micro-web, without ignoring any pages, should output:
prompt$ ./crawler http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html http://www.cse.unsw.edu.au/~cs9024/micro-web/
destination: http://www.cse.unsw.edu.au/~cs9024/micro-web/Z.html
Enter a page to ignore or type 'done': done
http://www.cse.unsw.edu.au/~cs9024/micro-web/X.html
-> http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
-> http://www.cse.unsw.edu.au/~cs9024/micro-web/Z.html
prompt$
Now let's add index.html to the ignore list:
prompt$ ./crawler http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html http://www.cse.unsw.edu.au/~cs9024/micro-web/
destination: http://www.cse.unsw.edu.au/~cs9024/micro-web/Z.html
Enter a page to ignore or type 'done': http://www.cse.unsw.edu.au/~cs9024/micro-web/index.html
Enter another page to ignore or type 'done': done
prompt$
Since index.html has been ignored, the path cannot be completed and nothing is returned. Your algorithm should iterate vertices/pages in the same order as the
crawler. This ensures that when your algorithm finds the shortest path, it will return the first path it would encounter from the BFS in the crawler. If your result
does not follow this order, you will be marked as incorrect, even if your path is valid.
Testing
We have created a script to automatically test your shortest path algorithm. It expects to find list.c and graph.c in the current working directory. Limited test
cases are provided, so you should always do your own, more thorough, testing.
prompt$ 9024 dryrun assn_path
Assessment
Due Date
Wednesday, 17 April, 11:59:59.
Late Penalty:
The UNSW standard late penalty for assessment is 5% per day for 5 days - this is implemented hourly for this assignment.
Each hour your assignment is submitted late reduces its mark by 0.2%.
For example, if an assignment worth 60% was submitted 10 hours late, it would be awarded 58.8%.
Beware - submissions more than 5 days late will not be accepted and will receive zero marks. This again is the UNSW standard assessment policy.
Submission
You should submit your list.c and graph.c files using the following give command:
prompt$ give cs9024 assn list.c graph.c
Alternatively, you can select the option to "Make Submission" at the top of this page to submit directly through WebCMS3.
Important notes:
Make sure you spell all filenames correctly.
You can run give multiple times. Only your last submission will be marked.
Ensure both files are submitted together. If you separate them across multiple submissions, each submission will replace the previous one.
Whether you submit through the command line or WebCMS3, it is your responsibility to ensure a successful submission is reported. Failure to submit correctly will not be accepted as an excuse.
You cannot obtain marks by e-mailing your code to tutors or lecturers.
Assessment Scheme
This assignment will contribute 12 marks to your final COMP9024 mark.
11 marks will come from automated testing, and 1 mark will come from manual inspection of your code.
The specific breakdown of marks is as follows:
Description        Marks
List ADT           3
Graph ADT          3
PageRank           2
Shortest Path      2
Memory Management  1
Code Quality       1
Total              12
Important:
Any submission that does not allow us to follow the aforementioned marking procedure "normally" (e.g., missing files, compile or run-time errors) may
result in delays in marking your submission. Depending on the severity of the errors/problems, we may ask you to resubmit (with max late penalty) or
assess your written code instead (e.g., for some "effort" marks only).
Ensure your submitted code compiles on a CSE machine using the standard options -Wall -Werror.
Memory management will be assessed using valgrind. You may refer to the Week 4 Practical for guidance on how you can compile your code and run it
through valgrind. Note, this will require you to write some sort of "driver" or "test" program for your ADT.
Code quality will be assessed on:
Readability - your code is generally easy to understand, follows typical spacing and indentation, and uses a consistent style.
Documentation - your code is documented in places where it is harder to understand.
While you are not required to follow it, you may refer to the CSE C Coding Style Guide.
Collection
Once marking is complete you can collect your submission using the following command:
prompt$ 9024 classrun -collect assn
You can also view your marks using the following command:
prompt$ 9024 classrun -sturec
You can also collect your submission directly through WebCMS3 from the "Collect Submission" tab at the top of this page.
Plagiarism
Group submissions will not be allowed. Your programs must be entirely your own work. Plagiarism detection software will be used to compare all submissions
pairwise (including submissions for similar assessments in previous years, if applicable) and serious penalties will be applied, including an entry on UNSW's
plagiarism register.
Do not copy ideas or code from others
Do not use a publicly accessible repository or allow anyone to see your code
Please refer to the on-line sources to help you understand what plagiarism is and how it is dealt with at UNSW:
Plagiarism and Academic Integrity
UNSW Plagiarism Policy Statement
UNSW Plagiarism Procedure
Copyright
Reproducing, publishing, posting, distributing or translating this assignment is an infringement of copyright and will be referred to UNSW Student Conduct and
Integrity for action.
